For the past month I’ve been working on a hand-cranked beat sequencer. This device grew out of a bunch of ideas I’ve had while working on my residency at DPI.
In short, the sequencer consists of two large, concentrically mounted wooden wheels. One wheel rotates on an axle; the other, which has sixteen sliding levers attached to its face, remains stationary. As the front wheel rotates, a switch mounted on its back is depressed by any lever on the back wheel that has been pushed toward the wheel’s center. Each time the rotating switch strikes a lever, a sample is triggered on the attached computer. By rotating the wheel at a constant rate, I can play repeating sequences of samples.
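The mechanism behaves like a step sequencer. A minimal sketch of the logic, with the lever pattern and step count modeled after the description above (the specific pattern is invented for illustration):

```python
# Minimal simulation of the wheel sequencer: sixteen lever slots,
# a rotating switch, and a trigger whenever the switch passes a
# lever that has been pushed toward the center.
# The lever pattern here is made up for illustration.

NUM_LEVERS = 16

def spin(levers, revolutions=1):
    """Yield the index of each engaged lever the switch passes."""
    for _ in range(revolutions):
        for step in range(NUM_LEVERS):
            if levers[step]:  # lever pushed in -> switch depressed
                yield step

# A simple four-on-the-floor pattern: every fourth lever engaged.
pattern = [i % 4 == 0 for i in range(NUM_LEVERS)]
hits = list(spin(pattern, revolutions=2))
print(hits)  # the same four positions repeat each revolution
```

At a constant rotation rate, the same indices come around each revolution, which is exactly the repeating sequence described above.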
The following video shows the operation of an early cardboard prototype:
Since shooting this video almost a month ago, I have been building a more substantial prototype out of wood.
Tonight, I’ll be testing out the wheel with the software I wrote before I made the cardboard prototype.
One of the motions I’ve been trying to represent physically is the selection of a region of audio. There are any number of screen-based ways to do this, but I wanted to create something mechanical that could accomplish it.
This idea came out of a discussion Shlomit and I were having last Friday evening about whether I was making a performance or an installation. She pointed out that I really needed to consider the performance from the audience’s perspective, so I considered flipping the whole thing around — facing the audience through a sheet of glass rather than standing with my back to them. During the performance, I could scratch at the back of a sheet of glass covered with black paint. The drawing tool would continue to generate some sort of audio. This performance concerns perception and revelation: as I scratch away the paint, I reveal the audience and the space, which I cannot see at the beginning of the performance. If I trace the outlines I see, I will also be rendering a mirror of the audience and simultaneously revealing my image through the scratch marks.
I created this mockup so I could see what it looked like at full size from the audience perspective.
On Monday evening, Wendy Richmond visited to see what I’ve been working on. One thing she questioned, both in watching me and in using the mechanism herself, was whether I was bothered by the way the ropes constrained me to a section of the board. I hadn’t been terribly bothered by it, but at the same time I was already working out a way to get past it with the infrared-detection scheme. The question still remains – what place do ropes have in this work? If they do belong, is the tension of the constraint also part of the work, or something I need to remove?
Later in the week, though, I sketched one possible solution to the rope loop constraint.
By replacing the rope loops with a simple system of counterweights, I would be able to move the drawing tool freely across the entire drawing surface.
On Saturday, I put together a prototype by cutting the rope loops and adding water-filled soda bottles as counterweights.
At first, the system had too much friction, but after adjusting the pulley locations and changing the amount of weight in the soda bottles, I found a good balance.
One thing I’ve been trying to work out is how to turn the sound on and off. For testing purposes, I’ve been using a footswitch that I step on whenever I start drawing. As I’ve been making larger drawings, the ropes have pulled the footswitch away from me.
Originally, I was planning to use a piezo-based contact mic mounted to the drawing surface to pick up the sound of the drawing tool, but my initial experiments with this technique did not yield good results. Although the techniques I found for interfacing a piezo element with an Arduino don’t include an amplifier, I have a feeling it will be necessary to boost the signal coming out of the piezo. I tried several piezo elements I had lying around — including a known good one — but I couldn’t get reliable data from any of them.
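If I do get a usable signal level out of the piezo, the gating logic itself is simple. Here’s a sketch of how I might turn a noisy sensor reading into an on/off drawing state, using a threshold with hysteresis so the gate doesn’t chatter near the threshold. The threshold values and sample data are invented; the readings are assumed to be 10-bit ADC values (0–1023), as an Arduino’s analogRead would produce:

```python
# Gate sound on/off from a noisy sensor level using hysteresis:
# rise above ON_THRESHOLD to start, fall below OFF_THRESHOLD to
# stop. Thresholds and the sample readings are hypothetical.

ON_THRESHOLD = 200   # rise above this -> drawing has started
OFF_THRESHOLD = 80   # fall below this -> drawing has stopped

def gate(readings):
    """Return a pen-down flag for each reading, with hysteresis."""
    drawing = False
    states = []
    for r in readings:
        if not drawing and r > ON_THRESHOLD:
            drawing = True
        elif drawing and r < OFF_THRESHOLD:
            drawing = False
        states.append(drawing)
    return states

readings = [10, 50, 250, 180, 120, 90, 60, 300, 40]
print(gate(readings))
```

The two-threshold scheme matters because a raw piezo signal hovers around any single cutoff, which would toggle the sound rapidly on and off.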
Hal suggested in his response to my post the other day that I might want to consider a pressure sensor, so today I’ve been prototyping a drawing tool holder which uses a force-sensitive resistor to determine how hard I’m pressing the drawing tool against the drawing surface.
My first attempt needs some refinement, though. There needs to be a spring mechanism inside (like a click ballpoint pen has) to keep the drawing tool off the sensor when I’m not drawing. In this first prototype, I used some packaging foam to hold the drawing tool in place. The undesired side effect is that the foam also holds the drawing tool securely against the force-sensitive resistor, so this prototype would never be able to tell me when the drawing tool is lifted.
This week I have focused on building the software which synthesizes audio. The software I used last year was very rudimentary; it could only map the position and direction of the pulleys’ movements onto audio file playback position and volume.
What I’ve been trying to do at the beginning of this residency is get the software to a point where I feel it has adequate expressive capabilities so that I can begin to try out the different mappings — and do more drawings.
At the beginning of this week, I worked towards mapping the existing data from the pulleys onto several different parameters in the Reason software synthesis environment. As I wrote earlier, there just wasn’t enough variety in the sounds I was generating to hold my interest.
To find another way to move forward, I turned my attention to the data — and came up with a number of ways to derive dynamic values from my existing sensing system.
Properties of Lines
Instead of simply treating the coordinates I receive from the sensors as changes in position, I could begin to record data about the emerging line — and its relationship to previously drawn lines.
Current line length – since I know how far the pen has moved horizontally and vertically, I know the distance it has travelled.
Running average of past line lengths – maybe it will be useful to know how the lengths of the lines have been changing over time.
Current line drawing time – it’s easy to figure out how long the pen was down. Once I know how long the pen has been down, I can also compute the speeds at which lines are being drawn. Maybe averaged data about these properties will also yield useful control values for generating audio.
“Not drawing” time – how long did I pause between drawing lines? Is my drawing proceeding at a “furious” pace or am I stopping to think a bit between making marks?
Line slopes – I want some way to tell if I’m drawing similar lines. I want visual repetition to translate into sonic repetition.
Line shapes: Loops – Am I drawing over the same line in order to darken it on the paper? The software should be able to sense this. I don’t know of a good computational approach to this, but I had the idea to look at the current line (pen-down to pen-up) as video.
If I rendered my digitized lines in a transparent color, they would darken over time, and I could probably find a computer vision algorithm which tests for an enclosed shape (not sure if this is blob tracking or something else).
Line shapes: Similar forms – There are computer vision techniques for sensing the statistical similarity of an image to a reference image. I tried some experiments with this using cv.jit, but this hasn’t yielded useful results yet.
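Several of the simpler properties above fall out of the same pen data. A sketch of how I might compute them from a list of (x, y, t) samples collected between pen-down and pen-up; the stroke data and the loop-closure threshold are invented for illustration:

```python
import math

# Derive basic line properties from (x, y, t) samples of one
# stroke: total length, drawing time, speed, overall slope, and a
# crude loop test (does the line end near where it began?).
# The 10%-of-length closure threshold is an invented heuristic.

def line_stats(points):
    """points: [(x, y, t), ...] in drawing order."""
    length = sum(math.dist(points[i][:2], points[i + 1][:2])
                 for i in range(len(points) - 1))
    duration = points[-1][2] - points[0][2]
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    slope = dy / dx if dx else float('inf')
    looped = math.dist(points[0][:2], points[-1][:2]) < 0.1 * length
    return {'length': length, 'duration': duration,
            'speed': length / duration if duration else 0.0,
            'slope': slope, 'looped': looped}

stroke = [(0, 0, 0.0), (3, 4, 0.5), (6, 8, 1.0)]
print(line_stats(stroke))
```

Running averages of these values over successive strokes would then give the slower-moving control signals mentioned above; real loop detection would still need one of the computer vision approaches.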
Last night, after integrating the code I worked on during the day, I made another drawing.
Here are some video samples showing how it is currently working. Both examples are taken from the full-length drawing session, in which I used a Max/MSP patch to control audio synthesis in Reason. The overall magnitude of the gestures I make controls the volume of the sound, and the separate horizontal and vertical magnitudes of the gestures control filtering parameters.
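The shape of that mapping can be sketched in a few lines. The ranges, scaling constants, and parameter names below are hypothetical; in the actual piece these values travel from the Max/MSP patch to Reason:

```python
# Sketch of the gesture-to-sound mapping described above: overall
# gesture magnitude -> volume, horizontal magnitude -> filter
# cutoff, vertical magnitude -> filter resonance. All constants
# are invented stand-ins for the values in the Max/MSP patch.

def map_gesture(dx, dy):
    """Map one step of pulley movement to synthesis parameters."""
    magnitude = (dx ** 2 + dy ** 2) ** 0.5
    volume = min(1.0, magnitude / 50.0)   # 50 units/step = full volume
    cutoff = 200.0 + abs(dx) * 40.0       # Hz; horizontal -> cutoff
    resonance = min(1.0, abs(dy) / 30.0)  # vertical -> resonance
    return {'volume': volume, 'cutoff': cutoff, 'resonance': resonance}

print(map_gesture(dx=30, dy=40))
```

Because only instantaneous magnitudes feed the mapping, slow drawing produces small, nearly constant values, which is exactly the lack of low-speed sensitivity noted below.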
I’m trying to understand the correlation between the drawing tool and the sound it makes (both in terms of texture and value). This drawing tool is a stick of hard graphite. I chose an airy sound which augments the natural sound of the tool on the drawing surface.
Here, the drawing tool is compressed charcoal. It makes much darker marks. Just for the record, my goal here is not to make light-saber sounds as I draw.
My initial reactions to last night’s drawing:
The sounds the system produces are not expressive enough. It’s only entertaining for a short period of time. I want to be able to shape them more over time.
The interface needs to be more sensitive at lower speeds (when I’m drawing slowly, I don’t hear any variation in the sound).
The combination of the drawing tool and the sound it produces is important to me.
I need a way to iterate through ideas a bit more rapidly — my software “toolbox” in this iteration of the patch seems a bit more limited — or perhaps I’m feeling more squeezed for time than I was when I was in school.
Today, I’ve been researching ways to enhance the expressiveness of the sounds as I draw.
Currently I’m using a foot pedal to turn the sound on when I start drawing, but I would really like the sound to start and stop automatically when I touch the drawing surface. This morning, I was experimenting with a piezo element to detect the sound of a drawing tool on the drawing surface, but didn’t get good results.
I’m also interested in finding ways to differentiate sonically between long flowing lines and short jagged lines. This afternoon I played a bit with a computer vision library for Jitter (cv.jit) in the hopes that I might be able to interpret the position data from the ropes graphically. The library contains some pattern learning features, but I’ve only played with it for a few hours so far. Eventually, I would like to be able to distinguish areas of loops as well.
Today was the first day of my artistic residency at the Digital Performance Institute. This morning, I cut the bracket pieces in the shop at DeBaun Auditorium, and then Kelly and I came into the city to buy the lumber I needed to make my drawing surface.
After an incident (!!!) involving the mirror in the lobby, I loaded my materials and the rope&pulley equipment into the space.
After dinner, I came back to assemble the drawing surface. Construction went smoothly, but I wish I had brought my carpenter’s square and level. I improvised using my digital caliper and hacked together a plumb bob with one of my wrenches so I could level out the support poles.
Tomorrow I’m going to set up the rope&pulley equipment and see if I can get it making some sound…