learning the ropes

things I made at ITP and after: sketches, prototypes, and other documentation

Thursday, April 23, 2009

Properties of Lines

This week I have focused on building the software which synthesizes audio. The software I used last year was very rudimentary; it could only map the position and direction of the pulleys’ movements onto audio file playback position and volume.

What I’ve been trying to do at the beginning of this residency is get the software to a point where I feel it has adequate expressive capabilities so that I can begin to try out the different mappings — and do more drawings.

At the beginning of this week, I worked towards mapping the existing data from the pulleys onto several different parameters in the Reason software synthesis environment. As I wrote earlier, there just wasn’t enough variety in the sounds I was generating to hold my interest.

To find another way to move forward, I turned my attention to the data — and came up with a number of ways to derive dynamic values from my existing sensing system.

Properties of Lines

Instead of simply treating the coordinates I receive from the sensors as changes in position, I could begin to record data about the emerging line — and its relationship to previously drawn lines.

  • Current line length – since I know how far the pen has moved horizontally and vertically, I can compute the distance it has travelled.

brainstorm - line length

  • Running average of past line lengths – maybe it will be useful to know how the lengths of the lines have been changing over time.
  • Current line drawing time – it’s easy to figure out how long the pen was down, and from that I can also compute the speeds at which lines are being drawn. Maybe averaged data about these properties will also yield useful control values for generating audio.
  • “Not drawing” time – how long did I pause between drawing lines? Is my drawing proceeding at a “furious” pace, or am I stopping to think a bit between making marks?
  • Line slopes – I want some way to tell if I’m drawing similar lines. I want visual repetition to translate into sonic repetition.
  • Line shapes: Loops – Am I drawing over the same line in order to darken it on the paper? The software should be able to sense this. I don’t know of a good computational approach to this, but I had the idea to look at the current line (pen-down to pen-up) as video.

brainstorm - darken over time

If I render my digitized lines in a transparent color, overlapping strokes will darken over time, and I could probably find a computer vision algorithm that tests for an enclosed shape (not sure if this is blob tracking or something else).

  • Line shapes: Similar forms – There are computer vision techniques for sensing the statistical similarity of an image to a reference image. I tried some experiments with this using cv.jit, but these haven’t yielded useful results yet.
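The per-line properties above could be accumulated with a small tracker. This is only an illustrative Python sketch (the post works in Max/MSP and Reason, not Python), and the `LineTracker` class, its method names, and the pen-event interface are all assumptions about how samples might arrive:

```python
import math

class LineTracker:
    """Illustrative sketch: accumulates per-line properties from pen samples.
    Assumes pen-down/move/pen-up events with (x, y) coordinates and timestamps."""

    def __init__(self):
        self.lengths = []    # lengths of completed lines
        self.durations = []  # drawing time (seconds) of completed lines
        self._points = []
        self._t_down = None

    def pen_down(self, x, y, t):
        self._points = [(x, y)]
        self._t_down = t

    def move(self, x, y):
        self._points.append((x, y))

    def pen_up(self, t):
        # Sum of straight-line segment lengths between consecutive samples
        length = sum(
            math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(self._points, self._points[1:])
        )
        self.lengths.append(length)
        self.durations.append(t - self._t_down)

    def avg_length(self, window=5):
        # Running average of the last few line lengths
        recent = self.lengths[-window:]
        return sum(recent) / len(recent) if recent else 0.0

    def slope(self):
        # Coarse overall slope of the current line: first point to last point
        (x1, y1), (x2, y2) = self._points[0], self._points[-1]
        return (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")
```

Each completed line then contributes a fresh length, duration, and slope that could be mapped onto synthesis parameters.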
posted by Michael at 11:26 am  

1 Comment »

  1. Nature of Motions

    Well, this line of thinking is a bit simplistic compared to tracking shapes, and you may have already incorporated all of this.
    So you have speed and direction, and maybe rough absolute position from the pulleys, right?

    So often, if you were using a mouse or something, you would want to turn the absolute position into speed, because speed is generally more emotionally connected. Location gets all tied up with responses to physical constraints.
    But extrapolate that to acceleration and it gets more interesting and dynamic.
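Deriving speed and acceleration from sampled positions is just repeated finite differencing. A minimal sketch, assuming positions arrive at a fixed sampling interval `dt` (the function name and 1-D signal are illustrative, not anything from the post):

```python
def derivatives(positions, dt):
    """Finite-difference speed and acceleration from positions
    sampled every dt seconds. Illustrative sketch only."""
    speeds = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    return speeds, accels
```

The same differencing applied per-axis would give velocity and acceleration vectors for the pen.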

    If you need pen down and up, maybe put the medium in a holder with a pressure sensor. The pressure might be interesting; it would follow mood and line weight, but might also give a leading indicator of a line ending.
    Maybe try a microphone in that holder rather than on the board.

    I did a little gesture recognition code that was shockingly simple. I just broke up the screen into a grid; 3×3 was enough for me, which was what was surprising.
    I just numbered each region and kept a list of the last 7 locations. (The list was updated each time a region boundary was crossed, and on a no-cross timeout.)
    I then saved lists from various gestures and used them as a library to compare to.
    It’s overly simplistic, but I did not have pen down or up like a block letter recognition system does.
    However, in looking into block letter recognition, the useful simplifying principle I picked up was to first scale any line to a standard scale before looking for shape patterns.
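The commenter's grid scheme could be sketched like this in Python (the function names and the exact-match comparison are my assumptions; the commenter doesn't say how the stored lists were compared, and a real version might allow fuzzy matches or the no-cross timeout entries):

```python
def region(x, y, w, h, n=3):
    """Map a point to a cell index (0..n*n-1) in an n-by-n grid over a w-by-h area."""
    col = min(int(x / w * n), n - 1)
    row = min(int(y / h * n), n - 1)
    return row * n + col

def trace(points, w, h, n=3, keep=7):
    """Region sequence for a gesture: record a cell only when the pen
    crosses into a new one, and keep the last `keep` entries
    (mirroring the commenter's list of 7 locations)."""
    seq = []
    for x, y in points:
        r = region(x, y, w, h, n)
        if not seq or seq[-1] != r:
            seq.append(r)
    return seq[-keep:]

def match(seq, library):
    """Return the name of the stored gesture whose trace equals seq, if any."""
    for name, stored in library.items():
        if stored == seq:
            return name
    return None
```

A left-to-right sweep along the top of the canvas, for example, quantizes to regions 0, 1, 2, which can then be looked up in the saved library.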

    Anyway, that just occurred to me because a total time value of how long the pen was in a region over the last xx seconds could make a nice data map, which would very coarsely relate to how dark that area was getting.
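That per-region dwell time could be kept in a sliding window. A sketch under my own assumptions (a fixed sampling interval, and pen samples already quantized to grid-cell indices like the commenter's 3×3 regions):

```python
from collections import deque

class RegionDwell:
    """Illustrative sketch: tracks how long the pen spent in each grid
    cell over a sliding time window, as a coarse proxy for how dark
    each area of the drawing is getting."""

    def __init__(self, window=10.0):
        self.window = window       # seconds of history to keep
        self.samples = deque()     # (timestamp, cell) pairs

    def record(self, cell, t):
        # Append the new sample and drop anything older than the window
        self.samples.append((t, cell))
        while self.samples and t - self.samples[0][0] > self.window:
            self.samples.popleft()

    def dwell(self, cell, dt=0.05):
        # Approximate dwell time: samples seen in this cell times the
        # sampling interval dt (assumed fixed)
        return sum(dt for _, c in self.samples if c == cell)
```

Each cell's dwell value could then drive a synthesis parameter, so heavily reworked areas of the drawing sound different from lightly touched ones.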

    Comment by Hal Eagar — May 7, 2009 @ 2:19 pm
