learning the ropes

things I made at ITP and after: sketches, prototypes, and other documentation

Wednesday, December 19, 2007

moneytone

On December 13th, I played moneytone as part of the NIME/Algorithmic Composition show at Exit Art. The six-minute composition was my final project for Algorithmic Composition, taught by R. Luke DuBois.

The composition was driven by financial transaction data from the past seven years of my life. I’ve been tracking my spending and earnings using software programs since 1998 and wanted to hear what this fairly large dataset (comprising 3143 days) could sound like.

moneytone patch performance

Instead of simply playing a recording of the piece, I chose to perform it live to see if it would be more engaging as a performance. I built some real-time interaction into the Max/MSP patch I was using to sonify the data so I could adjust the intensity of each of the 54 category frequency bands in the piece.
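
The Max/MSP patch itself isn’t reproduced here, so the sketch below is only a Python illustration of the idea, not the actual patch: each spending category gets its own frequency band, with a gain that can be adjusted during the performance. The category names, band spacing, and amplitude scaling are all invented for the example.

# Hypothetical stand-in for the band mapping, not the actual Max/MSP patch:
# each spending category gets its own band, with a gain that can be nudged live.
CATEGORIES = ["Groceries", "Gasoline", "Rent", "Dining", "Utilities"]   # 54 categories in the real piece

def band_frequency(index, base_hz=110.0, semitones_per_band=3):
    """Spread the category bands evenly, here every 3 semitones above 110 Hz."""
    return base_hz * 2 ** (index * semitones_per_band / 12.0)

class CategoryBand:
    def __init__(self, name, freq_hz):
        self.name = name
        self.freq_hz = freq_hz
        self.gain = 1.0   # the knob adjusted in real time during the performance

    def amplitude_for(self, amount, max_amount):
        """Scale a day's spending in this category into an amplitude between 0 and gain."""
        return self.gain * min(abs(amount) / max_amount, 1.0)

bands = [CategoryBand(name, band_frequency(i)) for i, name in enumerate(CATEGORIES)]
for band in bands:
    print(f"{band.name:9s} -> {band.freq_hz:6.1f} Hz, amp {band.amplitude_for(42.50, 500.0):.2f}")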

posted by Michael at 2:40 pm  

Sunday, November 11, 2007

Sonifying Gasoline Transactions

I continued my experiments towards one of my final project ideas: sonifying transactions. First, I exported all of my financial data (1999-January 2007) from Quicken into a tab-separated file and brought the file into OpenOffice Calc to tidy it up. After I extracted a selection of gasoline purchases from May 2001-May 2002, I realized there was something missing from the data. I’m interested in the rhythm of the purchases against the backdrop of the days and weeks. Since I didn’t purchase gasoline daily (thankfully), I needed to write a program that would insert the rest of the days into the transaction data. Doing this by hand seemed like a big pain — especially once I started working with the full dataset. To avoid having to make the algorithm aware of the number of days in each month, I simply generated a list of dates in OpenOffice Calc and compared it with the dates in my transaction list.

[ download code ]
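
The downloadable program isn’t shown in the post itself; as a rough Python sketch of the same idea (one row per calendar day, with zero-amount rows inserted for the days without a purchase), assuming a simple date-to-amount mapping:

from datetime import date, timedelta

# Gas purchases exported from Quicken, reduced here to (date, amount) pairs.
# The real export is tab-separated; this toy dictionary just illustrates the gap-filling.
purchases = {
    date(2001, 5, 3): 18.42,
    date(2001, 5, 11): 21.07,
    date(2001, 5, 24): 19.88,
}

def fill_missing_days(transactions, start, end):
    """Return one row per calendar day; days without a purchase get an amount of 0."""
    filled = []
    day = start
    while day <= end:
        filled.append((day, transactions.get(day, 0.0)))
        day += timedelta(days=1)
    return filled

for day, amount in fill_missing_days(purchases, date(2001, 5, 1), date(2001, 5, 31)):
    print(day.isoformat(), amount)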

After filling out my data, I started working in Processing and Csound to sonify the data. Starting simply, I used oscillators of different frequencies to represent days, weeks, and the transactions. The following Processing sketch is based on a sketch I wrote for my Google vs. Microsoft experiments. The classes were overkill for this application, but they were helpful for keeping the Google vs. Microsoft program legible.

[ download code ]

[ listen ]
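
The sketch and rendering are only available through the links above, so here is a hedged Python approximation of the mapping described: one oscillator frequency for the day pulse, another for week boundaries, and a third for the transactions, written out as Csound score lines. It assumes an instrument 1 that reads amplitude as p4 and frequency in Hz as p5; the specific frequencies, durations, and playback speed are invented for the example.

# Hypothetical mapping, not the original sketch: a low tick per day, a deeper
# pulse on week boundaries, and a brighter tone whenever a purchase occurred.
DAY_HZ, WEEK_HZ, PURCHASE_HZ = 220.0, 110.0, 660.0
SECONDS_PER_DAY = 0.25   # playback speed: four days per second

def csound_score(daily_amounts):
    """daily_amounts holds one value per calendar day, 0.0 on days with no purchase."""
    lines = []
    for i, amount in enumerate(daily_amounts):
        start = i * SECONDS_PER_DAY
        lines.append(f"i 1 {start:.2f} 0.10 2000 {DAY_HZ}")             # day tick
        if i % 7 == 0:
            lines.append(f"i 1 {start:.2f} 0.20 4000 {WEEK_HZ}")        # week marker
        if amount > 0:
            amp = min(int(amount * 200), 20000)                         # bigger purchase = louder
            lines.append(f"i 1 {start:.2f} 0.20 {amp} {PURCHASE_HZ}")   # transaction tone
    return "\n".join(lines) + "\ne\n"

print(csound_score([0, 0, 18.42, 0, 0, 0, 0, 21.07, 0, 0]))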

posted by Michael at 6:32 pm  

Monday, October 29, 2007

Meditation: The Seven Bridges of Königsberg

The Task: The Seven Bridges of Königsberg, in addition to being a very famous problem in graph theory, can be thought of as a type of probability table for score creation. If we presume a musical vocabulary of four events (corresponding to the west island, the north bank, the east island, and the south bank), we can create a Markov process based on the possibilities of moving from one part of the map to another. For example, from the east island we have an equal chance of travelling to the west island or either bank; from the west island, however, we are twice as likely to travel to the north or south bank as we are to the east island (i.e. there are two bridges to each bank but only one bridge between the islands). Furthermore, we could restrict the motion in our score to include randomness without repetition (i.e. you can only cross bridges that you haven’t just crossed).

Using this problem as an inspiration, create a musical sketch based on four sounds representing the locales (the two islands and the two banks) and seven sounds representing the bridges. Construct a piece that guides the listener on a walking tour through the city (which may or may not sound anything like a real city, or even a real space), attempting to solve the problem of the seven bridges. In other words, create a musical structure such that your path follows the topography of the city in such a way that you move in a semi-random path across the bridges, the only requirement being that you don’t double back on yourself immediately.

You can generate the score by hand or through a computer algorithm like the one we did in class this week for defining Markov chains. Bring in what you came up with (both the score (paper or code) and the resulting sound) and we’ll check it out!

Starting Out
I couldn’t quite wrap my head around how to represent the bridges in code, so I started sketching. I numbered the bridges and assigned letters to each land mass. Once I drew the bridges and land masses, I was able to draw a simplified graph, just as Luke had done with Markov chains in class.

After redrawing the graph with numbers, I could easily see how to make a two-dimensional array out of the graph. I drew up a Markov table for the bridges and land masses and then set about coding, starting with Luke’s Markov code from several weeks ago (see below for one of the versions I worked on).

Seven Bridges Sketches
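
Luke’s Markov code isn’t included in this post, so the following Python sketch is only a stand-in for the general idea: the seven bridges as a table of which land masses they connect, plus a walk that picks a random bridge from the current land mass but never immediately re-crosses the bridge it just used.

import random

# The seven bridges of Königsberg, keyed by number, as pairs of land masses:
# W = west island, E = east island, N = north bank, S = south bank.
BRIDGES = {
    1: ("W", "N"), 2: ("W", "N"),   # two bridges to the north bank
    3: ("W", "S"), 4: ("W", "S"),   # two bridges to the south bank
    5: ("W", "E"),                  # one bridge between the islands
    6: ("E", "N"),
    7: ("E", "S"),
}

def bridges_from(place):
    """All bridge numbers touching a given land mass."""
    return [number for number, ends in BRIDGES.items() if place in ends]

def walk(start="E", steps=16, seed=None):
    """Random walk across the bridges that never doubles back on the last bridge."""
    rng = random.Random(seed)
    place, last_bridge = start, None
    events = [place]
    for _ in range(steps):
        choices = [b for b in bridges_from(place) if b != last_bridge]
        bridge = rng.choice(choices)
        a, b = BRIDGES[bridge]
        place = b if place == a else a   # cross to the other end of the bridge
        events.extend([f"bridge {bridge}", place])
        last_bridge = bridge
    return events

print(" -> ".join(str(e) for e in walk(seed=7)))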

Once I had the code running, I started with a simple test to see what patterns the solution might reveal. I recorded the “name” of each bridge and land mass and used my code to generate a sound file of the “walk.”

The first two audio files I generated had different paths, but the west island occurred in the same place in both. I generated two more to see if the west island always happened in the same spots. Although it frequently occurred in the same positions, it didn’t always happen. This makes sense, as the west island, sitting in the middle of the graph, has the highest probability.

Knowing that the west island was the statistically most likely event to occur, I tried to structure a musical sketch around it by using the west island to play the tonic chord in a scale. I recorded other chords from the scale as the other land masses. For the bridges, I recorded short leading melody lines that I thought would smoothly lead between the chords. The results of that experiment weren’t particularly nice sounding, so I’m not including them here.

I started experimenting with pitches from a C minor 7th chord, using the Markov process from the Seven Bridges as an arpeggiator. I assigned the notes from the chord (C, E-flat, G, B-flat) to the land masses and other tones in the C minor (melodic?) scale to the bridges.
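
The exact note-to-bridge assignment isn’t written down here, so the mapping below is illustrative: the C minor 7th chord tones on the land masses as described, and a guess at the “other tones in the scale” (D, F, and A-flat across octaves) on the bridges. Fed with the events from a walk like the sketch above, this lookup is the whole arpeggiator.

# Illustrative pitch mapping (MIDI note numbers, middle C = 60). The land masses
# carry the C minor 7th chord tones from the post; the bridge notes are my own
# guess at the remaining scale tones, spread across octaves.
LAND_NOTES = {"W": 60, "N": 63, "E": 67, "S": 70}                 # C, Eb, G, Bb
BRIDGE_NOTES = {1: 62, 2: 65, 3: 68, 4: 74, 5: 77, 6: 80, 7: 86}  # D, F, Ab, D, F, Ab, D

def events_to_notes(events):
    """Map the walk's alternating land-mass / 'bridge n' events to MIDI notes."""
    notes = []
    for event in events:
        if event.startswith("bridge"):
            notes.append(BRIDGE_NOTES[int(event.split()[1])])
        else:
            notes.append(LAND_NOTES[event])
    return notes

# A short hand-written tour; in practice the events come from the walk above.
print(events_to_notes(["E", "bridge 6", "N", "bridge 1", "W", "bridge 3", "S"]))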

SB Pitches v1.mp3
My first attempt had some nasty clicks in it — and was much too slow.

SB Pitches v2.mp3

SB pitches against guitar samples v1.mp3
I mixed the generated arpeggiator against the guitar samples I recorded… and I liked the way it sounded. I was also thinking about what Brad Garton had said about using algorithmic composition at the score level — I constructed a chord progression of sorts with the guitar samples; they’re harmonically related. What could be interesting is to drive the “form” of the score using the same Markov approach, essentially taking the different chords in the progression and generating the rapid bridge transitions (the arpeggios) to match the current chord structure. On the other hand, I like the way this drones on currently.

SB Pitches v3.mp3
More notes per second… but the clicking increased. I tried adjusting the ramp to a reasonable value, but it didn’t seem to help. I then reviewed what Luke did with the EEG data (rise, duration, decay as p3/3, p3, p3) and things sounded much better.

SB tones+guitar v1.mp3
SB tones+guitar v2.mp3

The final two renderings contained Markov-controlled sine wave tones and guitar samples. The sine waves play as before, but now I’ve added guitar samples that play their own circuit of the Seven Bridges problem — but only the land masses; there are no bridge sounds. I was planning to add connecting musical phrases for the guitar, but found I liked the openness of the current sketches.

posted by Michael at 11:16 pm  

Thursday, October 25, 2007

Final Project Proposals

Two possibilities:
1. A performance or a sonification based on the Markov chains we derived from the Bridges of Königsberg. I’m interested in playing with maps of the NYC subway to derive similar trip-based sonifications.

2. A composition based on sonification of data from my financial transactions. I have almost 10 years’ worth of electronic transaction data that is usually only good for preparing tax returns. One particular question jumps out at me: what does buying gas each week, across all these years of data, sound like? Is there a good mapping that could represent this relentless consumption of fuel?

posted by Michael at 2:14 pm  

Thursday, October 11, 2007

Sonifying Datasets: Microsoft vs. Google

After we worked on sonifying EEG data in class, I wanted to try something on my own. I liked the “spectral” and ghostly qualities of the EEG sounds, but wondered what would happen if I tried to map the sounds of two competing sets of data — specifically stock prices. What would the Microsoft vs. Google race sound like?

I found stock quotes for the two companies at http://quotes.nasdaq.com and then massaged the downloaded data into a format I could easily read into a modified version of the Processing sketch we were using in class. I made the mistake of trying to graph the datasets (containing over 2,000 data points) using a spreadsheet program. It took forever.

What follows here are some of my notes as I worked through the process.

- Multiplied all values by 100 and rounded to get rid of decimal places
- Sorted in ascending date order
- I want Google and Microsoft to duel, so I will insert blanks for dates that are missing from Google’s history.
- In UltraEdit, I replaced all CR/LFs with spaces so the existing program can easily read the data.

- I don’t really want to use different octave ranges for the two stocks; that might seem to give primacy to one over the other. Perhaps I can change the type of base sound wave… one is a square, the other is a triangle.
- What is going to happen when I scale the ranges? I can either take the total range or allow each dataset to use its own range (see the sketch after these notes).
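
A rough Python sketch of the prep steps and the scaling question in these notes (the toy prices, target frequency range, and exact scaling are placeholders; the real work happened in a spreadsheet, UltraEdit, and Processing):

# Toy closing prices in dollars; the real data came from quotes.nasdaq.com.
msft = [("2006-01-06", 26.91), ("2006-01-13", 27.19), ("2006-01-20", 26.56)]
goog = [("2006-01-06", 465.66), ("2006-01-13", 466.25), ("2006-01-20", 399.46)]

def to_cents(rows):
    """Sort by date, multiply by 100, and round, so the sonifier only sees integers."""
    return [(day, round(price * 100)) for day, price in sorted(rows)]

def scale(value, lo, hi, out_lo=200.0, out_hi=800.0):
    """Map a price onto an output range (here, a frequency range in Hz)."""
    return out_lo + (value - lo) / (hi - lo) * (out_hi - out_lo)

msft_c, goog_c = to_cents(msft), to_cents(goog)

# The "CR/LFs replaced with spaces" format from the notes above:
print(" ".join(str(price) for _, price in goog_c))

# Scaling option 1 would use each stock's own min/max; option 2 (below) scales both
# against the combined min/max, so their relative price levels stay audible.
combined = [price for _, price in msft_c + goog_c]
lo_all, hi_all = min(combined), max(combined)
for (day, m_price), (_, g_price) in zip(msft_c, goog_c):
    print(day, round(scale(m_price, lo_all, hi_all)), round(scale(g_price, lo_all, hi_all)))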

Initial Experiments
v1: [ listen ] Google remains constant at a single pitch throughout the piece. Each dataset is using its own scale. I’m not sure I like the frequency range, either; I think I would like lower frequencies. First, I’ll try to get Google out of the mud by choosing a different range…

v2: [ listen ] I changed the starting time so Google doesn’t stay on 8.02 (D above middle C) the entire time. I needed to move later in the dataset, since I only have 38 months of Google data.

v3: [ listen ] Tried removing the drone from the times before Google starts rising, but I didn’t hear much difference.

v4: [ listen ] Made the scaling aware of both datasets. Now Microsoft drones as Google rises.

v5: [ listen ] Going for a longer dataset — nicer. I had the idea to do short snippets of this, making the composition out of corporate battles… MSFT vs. Google, GM vs. Ford.

v6: [ listen ] Lower frequencies — and spread apart by a fifth (.07 in octave pitch class).

v7: [ listen ] Changed the frequency relationship (spread out by .04) — I don’t like it.

v8: [ listen ] Lower frequency + a wider frequency spread (.11). I like the low, but the interval is weird. What would an octave and the original .07 feel like? I’d like to try different harmonics.

v9: [ listen ] Interesting — but too short. I can only get 10 years of data right now. It might also be interesting to use daily market data, which could contain much more raw data for a 10-year period. The other alternative is to render fewer data points per second.
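
The pitch values in these notes (8.02, offsets of .07 and .11) are Csound-style octave.pitch-class notation: 8.00 is middle C and the two digits after the decimal count semitones. A small Python helper, mirroring Csound’s cpspch conversion in equal temperament with A4 = 440 Hz, makes the intervals concrete:

def pch_to_hz(pch, a4=440.0):
    """Convert octave.pitch-class notation (8.00 = middle C) to Hz."""
    octave = int(pch)
    semitones = round((pch - octave) * 100)   # the two digits after the decimal point
    return a4 * 2 ** ((octave - 8) + (semitones - 9) / 12)

# 8.00 is middle C, 8.02 the D above it; a .07 offset is a fifth, .11 a major seventh.
for pch in (8.00, 8.02, 8.04, 8.07, 8.11):
    print(f"{pch:.2f} -> {pch_to_hz(pch):7.2f} Hz")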

Using More Data from the Dataset — Including Daily Stock Trading Volumes
I thought it might be interesting to control the intensity of the sound with the daily trading volume.

My dataset looks like:
[round(closing price * 100)],[volume]
656,36140900
638,27227700
641,34314300
644,18598400
633,45610400
619,38191300
628,36933400
636,18076800
659,38905600

I’ll remove the CR/LFs again and replace them with spaces so I’ll have:

656,36140900 638,27227700 641,34314300 644,18598400 633,45610400 619,38191300 628,36933400 636,18076800 659,38905600

After struggling for a bit to deal with the multiple sets of arrays, I decided to convert the program so it uses classes. This makes the data easier to access and the code more readable.
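
The revised Processing classes aren’t shown here, so this Python equivalent is only a guess at the structure: parse the space-separated price,volume pairs above into objects, then let volume drive amplitude.

class StockDay:
    """One trading day: closing price (in cents) drives pitch, volume drives loudness."""
    def __init__(self, price_cents, volume):
        self.price_cents = price_cents
        self.volume = volume

    def amplitude(self, max_volume, max_amp=1.0):
        """Louder on heavier trading days, scaled against the busiest day."""
        return max_amp * self.volume / max_volume

def parse(flat_data):
    """Parse the space-separated 'price,volume' pairs into StockDay objects."""
    days = []
    for pair in flat_data.split():
        price, volume = pair.split(",")
        days.append(StockDay(int(price), int(volume)))
    return days

days = parse("656,36140900 638,27227700 641,34314300 644,18598400 633,45610400")
busiest = max(day.volume for day in days)
for day in days:
    print(day.price_cents, round(day.amplitude(busiest), 2))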

v10: [ listen ] Don’t remember what I did here…

Google vs. MSFT v11: Pushed the Google and Microsoft data through my revised code (now using volume information). I realized the frequencies were backwards, though; I want Microsoft to be the low frequency.

v12: [ listen ]

Other Potential Experiments
- Microsoft in the left channel; Google in the right channel
- Use the S&P 500 or another index as the base level and then render the two competing stocks somewhere else in the frequency space

posted by Michael at 12:13 am  

Friday, September 28, 2007

Meditation: Lost in Translation

Exploring the idea that “translation can be seen as a process that, when used literally, can provide unexpected and compositionally interesting outcomes” — using translation as a generator. I used the Google Translator to do the translation.

I tried a couple of sessions with the translator and the following two examples are the funniest I produced.

When I lived in Texas many years ago, I could sometimes tune into a radio station from Mexico. At night, it played rock music in English. I heard “Hotel California” for the first time on that station. I always laughed when the DJ announced the songs, because he did it in “Tex-Mex” (or Spanglish as it is called here on the East Coast)…
“La voz de GunsandRoses”…
“La presencia de ACDC”…
The reason I mention this in the context of the above assignment is that during the day, the radio station played English-language pop songs sung in Spanish. At the time, you could hear songs like “End of the Road” by Boyz II Men, sung as “Vamos al fin de camino” (or something like that… it has been a really long time). A couple of weeks ago I heard a Beatles song sung in Spanish or Portuguese playing in the ITP lounge. It seems to me now that doing those translations well is probably a tricky operation.

So… this first attempt at the translation assignment involves part of a song that I first heard on the Reservoir Dogs soundtrack.

Stuck in the Middle with You
(Stealers Wheel)
English: Clowns to the left of me, jokers to the right
I am stuck in the middle with you.

Spanish: Payasos a la izquierda de mí, bromistas a la derecha
Me pegan en el centro con ti.

English: Clowns to the left of me, joking to the right
They beat to me in center with you.

Spanish: Payasos a la izquierda de mí, bromeando a la derecha
Batieron a mí en el centro con ti.

English: Clowns to the left of me, joking to the right
They beat in center to me with you.

German: Clowne auf der linken Seite ich, scherzend rechts
Sie schlugen in der Mitte zu mir mit dir.

English: Clowne on the left side I, joking right
They struck in the center to me with you.

German:
Clowne auf der linken Seite I, nach rechts scherzend
Sie schlugen in der Mitte zu mir mit dir an.

French:
Clowne page gauche I, vers la droite plaisantant
Ils ont attaché dans le milieu à moi avec te.

English:
Clowne left page I, towards the line joking
They attached in the medium to me with you.

Spanish:
Clowne dejó la página I, hacia la línea el bromear
Unieron en el medio a mí con ti.

English:
Clowne left page I, towards the line joking
They united in means to me with you.

German:
Clowne ließ Seite I, in Richtung zur Linie Scherzen
Sie vereinigten in den Mitteln zu mir mit dir.

French:
Le Clowne a laissé la page I, en direction à la ligne à des plaisanteries
Ils ont uni dans les moyens à moi avec te.

English:
Clowne left page I, in direction with the line with jokes
They linked in the means with me with you.

Perhaps you’ve encountered an instruction manual that didn’t quite read properly in English? Some of the phrases are pretty funny. I found an example in the manual of my Fostex Digital Multitracker VF-16.

Before Operating the Fostex Digital Multitracker VF-16
(originally written in Japanese)
Please read the overview before going any farther with your new recorder as it will save you a lot of time in the long run.

Spanish: Leer por favor la descripción antes de ir más lejos con tu registrador nuevo pues te ahorrará mucho tiempo a largo plazo.

English: To please read the description before going more far with your new recorder because it will save long time in the long term to you.

Spanish: Para satisfacer leer la descripción antes de ir más lejos con tu registrador nuevo porque ahorrará de largo plazo al largo plazo a ti.

English: In order to satisfy to read the description before going more far with your new recorder because it will save of long term to the long term you.

Italian: Per per soddisfare per leggere la descrizione prima di andare più lontano con il vostro nuovo registratore perché li conserverà del lungo termine al lungo termine.

English: For in order satisfying in order to read the description before going more far away with your new recorder because it will conserve them of along term to along term.

Italian: Per in ordine che soddisfa per leggere la descrizione prima di andare più faraway con il vostro nuovo registratore perché li conserverà lungo del termine lungo al termine.

English: For in order that satisfies in order to read the description before going more faraway with your new recorder because it will conserve them along of the long term to the term.

Spanish: Para para que satisfaga para leer la descripción antes de ir más lejano con tu registrador nuevo porque los conservará adelante del largo plazo al término.

English: For so that it satisfies to read the description before going furthermost with your new recorder because it will conserve them in front of the long term to the term.

posted by Michael at 4:01 pm  
