NIME Final Performance

For our NIME final concert, I decided to carry on with my idea about Chinese calligraphy after discussing it with Antonius. For last week's performance, I wrote the character "Dragon" with a brush, in combination with dragon-dance beats at different rhythms. This time, after several rounds of feedback and discussion with our classmates and professor, I decided to write "Crouching Tiger, Hidden Dragon" in Chinese calligraphy. My intention was to accompany the writing with the music from the movie, edited so that the beat changes from slow to fast. I thought it would be a perfect match for Chinese calligraphy.

My original idea was to use a web camera to detect which parts of the paper were black and to change the music when it sensed a dark area. However, a web camera might not be stable, since it is sensitive to the environment and could also be influenced by the movement of my hands and brush. Therefore, at Antonius's suggestion, I switched to another idea: moisture sensors. I borrowed four moisture sensors from the equipment room and placed one at each designated spot on the cloth. I also used the cover of an ink bottle to shield the Arduino and wires, to eliminate the influence of the LED lights.

WechatIMG15

A huge difficulty I encountered was the data I got from the moisture sensors: the readings were not very consistent each time I dipped ink onto the rice paper. Therefore, I tested many times to make sure the data stayed within a "safe" range for ink.
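
To find that range, it helps to watch the raw readings while dipping the brush. Below is a minimal Processing helper for that kind of calibration, not my actual test code: it assumes the Arduino prints one raw value per line (like the commented-out Serial.println in the sketch further down) and that the board is the first port in the list.

import processing.serial.*;

Serial port;
int minVal = 1023;  // lowest reading seen so far
int maxVal = 0;     // highest reading seen so far

void setup() {
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  // nothing to draw; serialEvent does the work
}

void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line == null || line.length() == 0) return;
  int v = int(line);
  minVal = min(minVal, v);
  maxVal = max(maxVal, v);
  println("current: " + v + "  range so far: " + minVal + " - " + maxVal);
}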

WechatIMG13

What's more, I also borrowed a huge brush to add a dramatic element to the calligraphy, so that it creates a Chinese art atmosphere along with the music.

WechatIMG14

Another difficulty was designing the music around the positions of the Chinese characters. I got a lot of inspiration from Antonius, who suggested cutting the music into three parts. On the first horizontal stroke, I started the beginning of my music. Then the music stopped while I dipped the ink, giving the audience time to wait and react. When I started the next vertical stroke, the music continued, and the same went for the third character. For the last one, I finished my performance with a single drum beat as I placed the final dot.

WechatIMG12 WechatIMG16

I practiced many times, and I was still nervous during the show. However, NIME was a fresh new experience for me. I benefited a lot from the stressful assignments and felt more creative after each week's challenge. It was a great course.

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue1 = analogRead(A0);
  int sensorValue2 = analogRead(A1);
  int sensorValue3 = analogRead(A2);
  int sensorValue4 = analogRead(A3);
  //Serial.println(sensorValue1); // uncomment to calibrate the thresholds

  if (sensorValue1 >= 450) {      // sensor 1 detects ink
    Serial.write(1);              // send 1 to Processing
  } else {
    Serial.write(0);              // send 0 to Processing
  }
  delay(50);

  if (sensorValue2 >= 290) {      // sensor 2 detects ink
    Serial.write(3);              // send 3 to Processing
  } else {
    Serial.write(4);              // send 4 to Processing
  }
  delay(50);

  if (sensorValue3 >= 300) {      // sensor 3 detects ink
    Serial.write(5);              // send 5 to Processing
  } else {
    Serial.write(6);              // send 6 to Processing
  }
  delay(50);

  if (sensorValue4 >= 300) {      // sensor 4 detects ink
    Serial.write(7);              // send 7 to Processing
  } else {
    Serial.write(8);              // send 8 to Processing
  }
  delay(50);
}
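
The Processing side isn't shown in this post, but as a rough sketch of what a receiver for the bytes above could look like (the sound file names and the port index are placeholders, not the ones from my performance): each odd byte starts the clip for its sensor, and the even bytes are ignored, since they just mean that sensor is still dry.

import processing.serial.*;
import ddf.minim.*;

Serial port;
Minim minim;
AudioPlayer[] parts = new AudioPlayer[4];

void setup() {
  size(200, 200);
  minim = new Minim(this);
  for (int i = 0; i < 4; i++) {
    parts[i] = minim.loadFile("part" + (i + 1) + ".mp3"); // placeholder names
  }
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  while (port.available() > 0) {
    int b = port.read();
    // 1, 3, 5, 7 mean "ink detected" on sensors 1-4; 0, 4, 6, 8 mean still dry
    if (b == 1) trigger(0);
    if (b == 3) trigger(1);
    if (b == 5) trigger(2);
    if (b == 7) trigger(3);
  }
}

void trigger(int i) {
  if (!parts[i].isPlaying()) {
    parts[i].rewind();
    parts[i].play();
  }
}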

Rihanna NIME Instrument 3 || Kinsa Durst

For my instrument, I created a program that detects noise and changes the playback speed of a song (Rihanna's "This Is What You Came For"): the louder the noise, the faster the song plays.

To make it more visual, I added a circle that rises as the amplitude gets higher and takes on a darker color.

When the user speaks or sings into the mic, the circle moves up, and the song plays faster in proportion to the amplitude of the audio.

import processing.sound.*;
import ddf.minim.*;
import ddf.minim.spi.*; // for AudioRecordingStream
import ddf.minim.ugens.*;

Amplitude amp;
AudioIn in;
float a;
AudioPlayer player;
Minim minim;

TickRate rateControl;
FilePlayer filePlayer;
AudioOutput out;

void setup() {
  size(600, 800);
  background(255);
  
  minim = new Minim(this);
  //player = minim.loadFile("rihanna.mp3", 2048);
  //player.play();
  
  filePlayer = new FilePlayer( minim.loadFileStream("rihanna.mp3") );
  filePlayer.loop();
  rateControl = new TickRate(1.f);
  out = minim.getLineOut();
  filePlayer.patch(rateControl).patch(out);
    
  // Create an Input stream which is routed into the Amplitude analyzer
  amp = new Amplitude(this);
  in = new AudioIn(this, 0);
  in.start();
  amp.input(in);
  
}      

void draw() {
  background(255, 255, 255);
  //println(amp.analyze());

  a = amp.analyze();              // mic amplitude, roughly 0.0 to 1.0

  float rate = a + 1;             // louder input -> faster playback
  rateControl.value.setLastValue(rate);

  a = a * 1000 * -1;              // scale up and flip sign so louder input moves the circle higher
  fill(a + 300);                  // louder input also makes the fill darker
  ellipse(300, a + 800, 20, 20);
}

Final instrument for NIME

In this final project, I originally wanted to make a map as my instrument. However, the material I used looked much more like a scroll, so I changed it to a scroll in the end.

IMG_2593

Here is the draft for my final.

IMG_2600

I will make a big "brush" to trigger sounds; different sounds will play when the brush touches the scroll.

IMG_2591 IMG_2592 IMG_2596

The scroll is shown below. I used the red cloth to fix the cotton in place so that conductive thread, instead of wires, could be used on the side of the scroll.

IMG_2599IMG_2598

IMG_2601IMG_2602

For the code part, Processing gets 3 signals from the Arduino each time and plays 3 sampled sounds.

IMG_2597

IMG_2594

import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.effects.*;
import processing.serial.*;

Minim minim;
AudioPlayer [] player = new AudioPlayer[3];
boolean [] on_off = new boolean[3];
boolean [] last_on_off = new boolean[3];

Serial myPort;
int[] serialInArray = new int[3];    // Where we'll put what we receive
int serialCount = 0;                 // A count of how many bytes we receive
boolean firstContact = false;

int mod;  // mod = 1 is Arduino input, mod = 2 is keyboard input
int digital_1 = 0;
int digital_2 = 0;
int digital_3 = 0;

void setup() {
  mod = 1;
  size(512, 200);

  minim = new Minim(this);
  player[0] = minim.loadFile("water sound.wav");
  player[1] = minim.loadFile("bamboo sound.wav");
  player[2] = minim.loadFile("walking.wav");

  if (mod == 1) {
    // Print a list of the serial ports, for debugging purposes:
    printArray(Serial.list());
    String portName = Serial.list()[3];
    myPort = new Serial(this, portName, 9600);
  }
}

void draw() {
  if (mod == 2){
    keyboard_function();    
  }
  on_off[0] = boolean(digital_1);
  on_off[1] = boolean(digital_2);
  on_off[2] = boolean(digital_3);

  for (int i = 0; i < 3; i = i+1) {
    estimate(i);
  }

}





void serialEvent(Serial myPort) {
  int inByte = myPort.read();
  if (firstContact == false) {
    if (inByte == 'R') { 
      myPort.clear();          // clear the serial port buffer
      firstContact = true;     // you've had first contact from the microcontroller
      myPort.write('A');       // ask for more
    }
  } else {
    serialInArray[serialCount] = inByte;
    serialCount++;

    // If we have 3 bytes:
    if (serialCount > 2 ) {
      digital_1 = serialInArray[0];
      digital_2 = serialInArray[1];
      digital_3 = serialInArray[2];

      // print the values (for debugging purposes only):
      println(digital_1 + "\t" + digital_2 + "\t" + digital_3);

      // Send a capital A to request new sensor readings:
      myPort.write('A');
      // Reset serialCount:
      serialCount = 0;
    }
  }
}
void estimate(int i) {
  if (on_off[i]) {
    if (!last_on_off[i]) {
      player[i].play();
      if (!player[i].isLooping()) {
        player[i].loop();
      }
    }
  } else {
    if (player[i].isPlaying()) {
      player[i].pause();
    }
  }
  last_on_off[i] = on_off[i];
  println(str(i) + ',' + str(on_off[i]));
}
void keyboard_function() {
  if (keyPressed) {
    if (key == 'j' || key == 'J') {
      digital_1 = 1;
    }
    if (key == 'k' || key == 'K') {
      digital_2 = 1;
    }
    if (key == 'l' || key == 'L') {
      digital_3 = 1;
    }
  }
}

void keyReleased() {
  if (key == 'j' || key == 'J') {
    digital_1 = 0;
  }
  if (key == 'k' || key == 'K') {
    digital_2 = 0;
  }
  if (key == 'l' || key == 'L') {
    digital_3 = 0;
  }
}

NIME – Zeyao – Final Project

For my NIME final project, I already described what I was going to do in the last documentation post, the final storyboard: I would use two Leap Motions with two laptops to create a sound story about a storm in the forest. When I coded my Leap Motion program I faced some challenges, which I will explain in this documentation of my final project.

The most important part of the Leap Motion is its gestures, so the first thing I wanted to figure out was how many gestures and sounds I needed. I had "Tap", "Swipe", "Roll" and "Pitch", which matched the "thunder", "wind", "rain" and "bird" sounds.

For the "tap" function, I created a current HandY value and a previous HandY value, and set both defaults to 0. Then I created another value, the current value minus the previous one, which gave me the hand movement per frame. However, the problem was that the sound was triggered as soon as my hand hovered over the Leap Motion. To solve this, I only triggered the sound when the movement was between 30 and 100. I also realized the sound would be triggered again and again; to make it play only once, I set a timer so that the sound can't play again until the timer counts down to 0.
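
Stripped of the Leap Motion specifics, the trigger logic looks roughly like the sketch below. It is only an illustration: mouseY stands in for the hand height the Leap Motion library reports, and the println stands in for playing the thunder sound.

float prevHandY = 0;
int cooldown = 0;  // frames left before another trigger is allowed

void setup() {
  size(400, 400);
}

void draw() {
  float handY = height - mouseY;    // stand-in for the Leap Motion hand height
  float delta = prevHandY - handY;  // how far the hand dropped this frame
  if (cooldown > 0) {
    cooldown--;                     // timer still counting down, no retrigger
  } else if (delta > 30 && delta < 100) {
    println("tap -> play thunder"); // the real sketch plays the sound here
    cooldown = 30;                  // roughly half a second at 60 fps
  }
  prevHandY = handY;
}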

Screen Shot 2017-03-28 at 2.25.47 PM

For the pitch function, I got the X and Y positions of my thumb and my index finger, and then computed the distance between the two fingers. After a few tests, I set the trigger distance to 20. Then I created two booleans, itsPitch and prevItsPitch, both defaulting to false. Once the distance is less than 20, itsPitch becomes true; otherwise it is false. When itsPitch is true and prevItsPitch is false, the sound starts playing; otherwise it stops. At the end of each frame I set the previous value equal to the current one, which makes sure the trigger isn't a one-time thing.
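
That prev/current comparison is a classic edge-detection pattern. Here is the bare pattern in Processing, purely as an illustration, with the distance from the mouse to the center standing in for the thumb-to-index distance:

boolean itsPitch = false;
boolean prevItsPitch = false;

void setup() {
  size(400, 400);
}

void draw() {
  // stand-in input: mouse distance to the center replaces the finger distance
  float d = dist(mouseX, mouseY, width/2, height/2);
  itsPitch = (d < 20);
  if (itsPitch && !prevItsPitch) {
    println("pinch started -> play bird sound");  // rising edge: start once
  } else if (!itsPitch && prevItsPitch) {
    println("pinch released -> stop bird sound"); // falling edge: stop once
  }
  prevItsPitch = itsPitch;  // remember this frame's state for the next frame
}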

Screen Shot 2017-03-28 at 7.59.39 PM

I used the same logic for the wind and rain functions, so I won't explain it again. There is one difference, though: I applied an amplify function to both wind and rain. The volume defaults to 1, and when the "stop" gesture is triggered, the volume decreases gradually until the sound stops.
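
I don't reproduce my exact amplify code here, but if the sounds were Minim AudioPlayers, the same fade-then-stop behavior could be sketched with shiftGain; the file name, fade time, and key trigger below are placeholders:

import ddf.minim.*;

Minim minim;
AudioPlayer wind;
boolean fading = false;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  wind = minim.loadFile("wind.mp3"); // placeholder file name
  wind.loop();
}

void draw() {
  // once the ramp has brought the gain near silence, actually stop the sound
  if (fading && wind.getGain() <= -39) {
    wind.pause();
    fading = false;
  }
}

void keyPressed() {
  wind.shiftGain(0, -40, 1500); // stand-in for the "stop" gesture: fade out over 1.5 s
  fading = true;
}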

Screen Shot 2017-03-28 at 8.16.21 PM

Screen Shot 2017-03-28 at 8.16.29 PM

Following the storyboard, I gave a performance that let people understand the story pretty clearly! The feedback I got was that my gestures fit the sounds perfectly, and I feel really happy about that!

NIME Final – Wearable Imaginary Instrument + Live Looping

Maya Wang & Callum Amor

Inspiration and Concept:

Callum and I decided to perform together after realizing that our final project ideas for NIME were very similar in that we wanted to make a wearable instrument. We would both be building off of our previous projects, and to incorporate more of what we learned, we also decided to use Ableton to live loop the wearable instrument.

Interface:

Callum came up with the idea of playing an imaginary instrument, meaning a physical motion would trigger a certain sound, i.e. strumming = guitar. I thought hand gestures using conductive patches would be an interesting way of creating the "switch." (Later this hand gesture idea failed, but I will elaborate on that in the Creation section.) We decided that I would be the "performer", wearing the instrument, and Callum would be the "producer", live looping the sounds I played.

Song/Sounds:

To perform, we of course needed sounds to create a song. I left the composition to Callum, as he was very keen on composing an original song. It also made it easier to decide how many sounds I would be able to play, given the limitations of the Arduino UNO, which has only 12 usable digital pins once the two serial pins are excluded. We first decided on 12 sounds: 3 drum, 3 bass, 3 piano, and 3 melody. The general idea was that these sounds would be played and looped to build up to the final song. (More of the specific musical inspiration is detailed in Callum's documentation, so go check that out if you'd like a more accurate description of the music aspect.) (Again, this setup failed later on.) After having a solid idea down, we got to work creating the physical instrument and the song.

Creation:

Gloves:

I began by creating the instrument, which was housed on a pair of gloves. We bought a pair of touch-screen-compatible gloves that had conductive patches on some of the fingertips, which helped with the creation. I sewed conductive fabric to the fingertips, and then used conductive thread to connect those patches to wires at the ends of the gloves. The circuit schematic followed that of a normal pushbutton, with signal on the fingertips and power from the thumb, so that whenever a fingertip came into contact with the thumb, a different sound would play. (However, one of the reasons the gloves failed is that I left out a ground connection with a resistor for each "button.")

As for the Arduino and Processing codes, they were the same codes as Callum’s previous instrument, just with different sound samples. However, since it did not work with these gloves, I will not include them in this post.

Trial 1/Dress Rehearsal:

After uploading the code to the Arduino, putting on the gloves, and running the Processing sketch, we encountered many problems, the biggest of which involved the circuit on the gloves: 1. the circuit did not have a ground or a resistor for each button; 2. the conductive fabric acted as a resistor and did not trigger when we wanted it to; 3. the conductive thread was not taut enough and made contact whenever my hand moved, short-circuiting the whole glove. Overall, the gloves were a bad design and were one of the main reasons our first trial did not work. The code also had many issues, but the physical component was more important, and we could write new code once the instrument was complete.

Creation Part 2:

Sweatshirt:

After the dress rehearsal for our final performance, we decided to move the switches from the gloves to a sweatshirt. Instead of using conductive fabric and thread, I decided it was best to run wires on the inside, poke holes in the sweatshirt, and expose a patch of metal on the outside so that there would be less resistance. Sacrificing comfort for accuracy was a smart compromise in the end. This time I made sure to include resistors and a ground in the circuit, and I cut the inputs down to 9 pins. The first three pins would be near the stomach to imitate a guitar, the next three on the chest for drums, and the last three on the arm to mimic a keyboard.

The overall appearance of the sweatshirt mattered, since we wanted the instruments to be an illusion. I soldered over the exposed wire circles to make them look like inconspicuous studs. As for the fingertip glove connected to power, I changed the conductive fabric to conductive tape to minimize the resistance (not pictured). At last the physical aspect of the instrument was complete.

Immense thanks to Antonius for helping us with the entire process, especially the code. Since our original code did not work in either Arduino or Processing, we had to create a completely new set of programs. The Arduino code sends the button values over serial; the Processing sketch takes those values and loads and plays the corresponding sound file.

Song:

With the new constraint of 9 pins came the challenge of making a decent-sounding song with only 9 sounds. Callum created a completely new song in Ableton, consisting of 3 drum, 3 guitar, and 3 bass. I edited the sound files so there would be as little delay as possible.

Final Sounds Used:

Trial 2/Practice:

Now, it came time to practice. Of course, nothing works perfectly the first time around, so naturally, we ran into many problems.

  1. The constraints of serial communication created a delay between touching the sweatshirt and hearing the sound, so I had to work with the delay while playing the instrument. This made for a very unnatural-looking performance, but I worked to make it as smooth as possible.
  2. While practicing, some sounds started triggering randomly, so we took the problem to Antonius to troubleshoot. We began by covering every exposed wire with electrical tape, but the problem persisted. After a while, we realized that the random sounds corresponded only to the yellow wires. Since it would have been incredibly troublesome to replace all the yellow wires, we decided to cut out those sounds, reducing our sound count to 6. Compromising one of each sound (guitar, bass, drum), we were still able to make a song.

  3. This problem was related to the delay, but getting the performative aspect (script, timing, acting) right was also a large hurdle. As a result of the delay, the drum sounds were very hard to control. Callum controlled the live loops in Ableton, but cutting and editing sound samples while performing was very difficult. Despite practicing for many hours, the final performance was still not perfect.

Final Performance:

Finally, it came time to present our instrument and perform live in front of an audience. Even with our careful preparation and positive attitudes, some things still went awry. We enthusiastically performed our pre-performance skit, which was very fun and engaging. But when I played the first bass drum note, I realized the sound was very off: it was way too loud and extremely crunchy, which made it barely distinguishable from the snare drum sound. From there stemmed the problem of not being able to hear myself play, which also made it hard for Callum to hear the distinct sounds. I tried to act as if nothing was wrong, but in reality I was frantically adjusting the knobs to somehow salvage the sounds. My laptop even went into sleep mode halfway through the performance, but thankfully Callum's live loop continued to play while I pretended to play the instrument and fixed the problem. In the end, I got my computer to work again, and we ended with the song's decrescendo and a final "unplugging" sound. Throughout the whole performance we tried our best not to break character, since "the show must go on."

In the end, our performance was not perfect, but it was the best we could do given the mass of problems we encountered through all stages of the creation and practice process. I am very proud of Callum and myself for going on stage and performing with an instrument created completely from scratch with the knowledge we had.

Thank you Antonius, not only for your tremendous assistance on this project and many others, but for this creative, welcoming, and incredibly challenging IMA class. I was able to realize my passion for experimental interfaces through your patient instruction and engaging creativity. I had a blast learning something new every week for the past 7 weeks. The challenging material and open interpretation of music really spurred my productivity and creativity. By teaching a wide range of material and pushing my creativity, you helped me create many things I would have never dreamed of before this class. The challenging workload, nerve-inducing performances each week, and the sheer amount of new knowledge really helped me grow as an IMA major as well as a performer.

//Arduino Code
int buttons [9];
int states[9];

void setup() {
  for (int i = 0; i < 9; i++) {
    buttons[i] = i + 2; // so button 0 will be 2, etc.
    states[i] = 0; // make default state 0
    pinMode(buttons[i], INPUT);
  }
  Serial.begin(57600);                    // Start serial communication at 57600 bps
}

void loop() {
  // instead of bytes
  // we are sending ascii
  for (int i = 0; i < 9; i++) {
    states[i] = digitalRead(buttons[i]); // read the buttons for each one
    delay(1);
    Serial.print(states[i]); // print all of them in ascii
    Serial.print(","); // separate by commas to make a comma separated value
  }
  Serial.println("0"); // one last println to make a new line. with one last zero to not end with comma
  delay(10);
}

//Processing Code
import processing.serial.*;
import ddf.minim.*;

Serial myPort; 
int[] val;     

Minim minim;
AudioPlayer[] song;

boolean[] songPlaying; //array of songs playing or not

void setup() 
{
  size(200, 200);
  song = new AudioPlayer[10];
  minim = new Minim(this);
  myPort = new Serial(this, "COM6", 57600);

  val = new int[10];
  // initalize values
  for (int i = 0; i < 10; i++) {
    val[i]=0;
  }
  song[0] = minim.loadFile("guitar1.wav");
  song[1] = minim.loadFile("guitar2.wav");
  song[2] = minim.loadFile("chord3.wav");
  song[3] = minim.loadFile("drum1.wav");
  song[4] = minim.loadFile("drum3.wav");
  song[5] = minim.loadFile("drum2.wav");
  song[6] = minim.loadFile("key1.wav");
  song[7] = minim.loadFile("key2.wav");
  song[8] = minim.loadFile("key3.wav");

  songPlaying = new boolean[10];
  for (int i = 0; i < 10; i++) {
    songPlaying[i] = false;
  }
}

void draw() {
  if ( myPort.available() > 0) { // If data is available,
    //val = myPort.read();         // read it and store it in val
    // this assumes as bytes...
    // if it's not null then
    String inString = myPort.readStringUntil('\n'); // read characters until the newline and make them into a String
    if (inString != null) {
      // println(inString); // print it
      // and convert the sections into integers:
      int sensors[] = int(split(inString, ','));
      //      println(sensors.length);
      // match sensors with values
      if (sensors.length == 10) {
        for (int i = 0; i < 10; i++) { // print them one by one with commas
          val[i]=sensors[i]; // match values with sensor
        }
      }
    }


    for (int i = 0; i < 10; i++) { // print them one by one with commas
      print(val[i]);
      print(",");
    }
    println();
  }
  for (int i = 0; i < 10; i++) {
    if (i != 2 && i != 5 && i != 8) { // skip the sounds that were on the faulty yellow wires
      if (val[i] == 1) {
        if (songPlaying[i] == false) {
          song[i].rewind();
          song[i].play();
        }
        songPlaying[i] = true;
      } else {
        songPlaying[i] = false;
      }
    }
  }
}

NIME Final Project: Range Detection Noise Maker

I aimed to use the RPLIDAR, a device that spins 360 degrees while firing infrared laser pulses, which it uses to detect surrounding objects and their distances. My aim was to create an instrument in which each angle would represent a note and the distance would represent the strength of the note.

I decided to use Python for this project because it offered the easiest way of communicating with the RPLIDAR: a serial-port connection lets my program send commands to and receive data from the device. My program tells the RPLIDAR to start spinning, then starts scanning and returns the data it retrieves. I collect these data and, using another Python library called PyAudio, correlate the angle input with the frequency of a note and the distance with the amplification of the note.

However, after this was coded, I realized that the notes do not play in parallel; instead they queue up and wait for other notes to finish before playing. This caused many bugs, as the RPLIDAR sends hundreds of angle-distance measurements per second, which was too many notes for the program to handle: the notes would respond too slowly or not play at all. Therefore, I decided to use only small slices of the 360 degrees, on two sides of the RPLIDAR (roughly angles 350-355 and 170-175). I made the frequency of the notes on each side correlate with the detected distance, so the further away the object, the lower the frequency. For the 170-175 side, I raised the base frequency so that the instrument could produce more varied sound.

'''Displays measurements and plays them as tones'''
from rplidar import RPLidar
import math
import pyaudio

PyAudio = pyaudio.PyAudio

BITRATE = 260000 #frames per second
FREQUENCY = 1500 #waves per second, 261.63 = C4 note
LENGTH = .04 #seconds to play

p = PyAudio()
stream = p.open(format = p.get_format_from_width(1),
            channels = 1,
            rate = BITRATE,
            output = True)

PORT_NAME = 'com3'  # serial port the RPLIDAR is connected to

def run():
    '''Main function'''
    lidar = RPLidar(PORT_NAME)
    try:
        print('Press Ctrl+C to stop.')
        # each measurment is a tuple: (new_scan, quality, angle, distance)
        for measurment in lidar.iter_measurments():
            angle = int(round(float(measurment[2])))
            dist = int(round(float(measurment[3])))

            # keep only the two narrow playable slices of the circle
            if angle < 350 and angle > 180:
                continue
            if angle > 351:
                continue
            if angle < 175:
                continue

            # discard readings that are too close or too far away
            if dist < 1:
                continue
            if dist > 1499:
                continue

            LENGTH = .05
            FREQUENCY = 1500
            if angle < 350:
                FREQUENCY = 4500  # the other side gets a higher base pitch

            FREQUENCY -= dist     # further away -> lower frequency

            NUMBEROFFRAMES = int(BITRATE * LENGTH)
            RESTFRAMES = NUMBEROFFRAMES % BITRATE
            WAVEDATA = ''

            # build one short 8-bit sine tone followed by a tail of silence
            for x in range(NUMBEROFFRAMES):
                WAVEDATA = WAVEDATA+chr(int(math.sin(x/((BITRATE/FREQUENCY)/math.pi))*127+128))

            for x in range(RESTFRAMES):
                WAVEDATA = WAVEDATA+chr(128)

            stream.write(WAVEDATA)
        lidar.reset()

    except KeyboardInterrupt:
        print('Stopping.')
    stream.stop_stream()
    stream.close()
    p.terminate()
    lidar.stop()
    lidar.disconnect()

if __name__ == '__main__':
    run()

Week 7: Final Project "Five Hundred Miles (Remix)" by Tian (Antonius)

Team Member: Gao Yang & Tian

For the final project Gao Yang and I created an automatic harmonizing device with Max/MSP and gave the performance Five Hundred Miles (Remix). The story is about a person who leaves home, struggles to make a life away from home, and can't go back home without having achieved anything.

IMG_0474

(Final product of laser cutting)

At the first stage we had a lot of ideas, and we gave up a lot of them. We wanted to include the harmonizing technique in Max, a guitar, and a cardboard set that told the story, but we found it was too much and hard to combine. After the rehearsal in Week 6, we decided to leave out the cardboard and instead give a live show with real people. In a later rehearsal Antonius also told us to leave out the guitar, since it was too much and distracting. In the end, we focused on the harmonizing and our live performance.

guitarIMG_0474

(Abandoned Max patcher for guitar recording and abandoned laser-cut products for the cardboard)

Scott helped us a lot with Max. With his help we learned how to use different objects such as sfplay~, gain~, comment, scope~ and gizmo~, among which gizmo~ was the object that realized our real-time harmonizing. We found there was delay created both by the sound card and by gizmo~, but it turned out not to influence our performance too much.

max

Since Gao Yang and I wanted to harmonize our voices differently, the strategy was to separate our voices into the left and right input channels through the mixer and then manipulate them differently in Max. We tested different chords to get the best harmonizing sound. In the end we had two original voices and five harmonizing voices in total.

IMG_0469IMG_0467

In order to make our performance more vivid, we also got sample sounds of a train horn and travel noise, and recorded two whistle sounds that are mentioned in the lyrics. We decided to include road signs as stage props to make the show richer.

IMG_0473IMG_0472

We edited the song a lot so it had ups and downs and could work well with all our technology and performance. We spent a lot of time on rehearsal, since I needed to get comfortable with the control board in Max and Gao Yang needed to perform as a character. I was satisfied with our final performance, and I was really glad I got the chance to combine my interest in vocal performance with the new technology I learned.

IMG_0470 IMG_0468

NIME was a really intensive course, and I learned a ton in a short time. Thanks to Antonius, Scott, our five volunteers, and everyone who helped with our project!

The final performance can be found here (starts at about 1:00:00):

https://www.facebook.com/nyushima/videos/1453991354673058/

NIME Final Instrument

For my final instrument, I built off my first instrument. I decided to use my broken ukulele as the body. For my performance, I was going to pretend to eat my ukulele, so I used Makey Makey and conductive tape to turn the broken ukulele into something I'd eventually put my saliva all over. Makey Makey actually made things very easy; technically, this instrument wasn't hard to make. I first had to decide which parts of the ukulele I was going to chew, bite, and/or lick to trigger noises. I used conductive copper tape to connect wires and alligator clips to the Makey Makey. I then used this simple code, which is basically a keyboard piano. With the Makey Makey hooked up, my computer read a touch on a conductive area as the key or arrow it was wired to on the board. Because the Makey Makey board only has a certain number of pins for keys, I needed to use the arrows as well, which required simple modifications to the code.

import ddf.minim.analysis.*;
import ddf.minim.*;
import ddf.minim.signals.*;

Minim minim;
AudioOutput out;

void setup()
{
size(512, 200, P3D);

minim = new Minim(this);
out = minim.getLineOut(Minim.STEREO);
out.setVolume(1.0);
}

void draw()
{
background(0);
stroke(255);
for(int i = 0; i < out.bufferSize() - 1; i++)
{
float x1 = map(i, 0, out.bufferSize(), 0, width);
float x2 = map(i+1, 0, out.bufferSize(), 0, width);
line(x1, 50 + out.left.get(i)*50, x2, 50 + out.left.get(i+1)*50);
line(x1, 150 + out.right.get(i)*50, x2, 150 + out.right.get(i+1)*50);
}
}

void keyPressed()
{
SineWave mySine;
MyNote newNote;

float pitch = 0;
switch(key) {
case 'w': pitch = 262; break;
case 'a': pitch = 277; break;
case 's': pitch = 294; break;
case 'd': pitch = 311; break;
case 'f': pitch = 330; break;
case 'v': pitch = 349; break;
case 'g': pitch = 370; break;
case 'b': pitch = 392; break;
case 'h': pitch = 415; break;
case 'n': pitch = 440; break;
case 'j': pitch = 466; break;
case 'm': pitch = 494; break;
case ',': pitch = 523; break;
}

switch(keyCode) {
case LEFT: pitch = 554; break;
case RIGHT: pitch = 587; break;
case UP: pitch = 622; break;
case DOWN: pitch = 659; break;
}

if (pitch > 0) {
newNote = new MyNote(pitch, 0.2);
}
}

void stop()
{
out.close();
minim.stop();

super.stop();
}

class MyNote implements AudioSignal
{
private float freq;
private float level;
private float alph;
private SineWave sine;

MyNote(float pitch, float amplitude)
{
freq = pitch;
level = amplitude;
sine = new SineWave(freq, level, out.sampleRate());
alph = 0.9;
out.addSignal(this);
}

void updateLevel()
{
level = level * alph;
sine.setAmp(level);

if (level < 0.01) {
out.removeSignal(this);
}
}

void generate(float [] samp)
{
sine.generate(samp);
updateLevel();
}

void generate(float [] sampL, float [] sampR)
{
sine.generate(sampL, sampR);
updateLevel();
}

}

I slightly modified pitches to my liking.

The hardest part, honestly, was figuring out how to arrange and display the wires. The original arrangement was super messy and clunky, so I ended up braiding them, but that took some experimenting too, as I later swapped the conductive copper tape for fancy new conductive fabric tape that was super amazing. I also had a piece of the ukulele neck that I wanted to bite off during the performance, and I was very unsure whether I should braid its wire in with the others. It took a lot of experimenting with different braiding and wiring designs, but eventually I figured it out. I ended up taping the Makey Makey to the back of the board instead of the bottom, because that worked better, and I soldered the wires to a header so that they would stay in place.

IMG_1423 IMG_1430

IMG_1429 IMG_1428

And here is a video of me making poor Rudy try my pre-spit instrument for the sake of documentation, though he played it in a more conventional way than I did.

For my performance, I basically bit off a piece and had an audience member, whom I had chosen beforehand, hold the instrument while I licked it. What I didn't tell her is that I was going to leave her there with my instrument once I was done. Her expression was quite priceless. Overall, this was a super fun project, and the performance was not nearly as scary as I thought it would be.

Week 7: Final Instrument, Magical Water

Student Name: Jianghao HU (Sam).

I had this idea of working with water early on, while I was working on my last instrument, A Bite of Music. Carrying this idea forward, I started thinking about what kind of water trigger I was going to use for the expression. One original idea was using different types of liquid to trigger different sounds, but I abandoned it due to some technical limitations. I was then inspired by Antonius and started to think about what water means to me. Perhaps I haven't found the answer yet, but during the process of thinking I noticed there is an incredible number of different conditions that water exists in. This thought shaped my final instrument, which explores water in different conditions and triggers corresponding sounds.

As for the technology side, I used an FSR sensor to detect the weight of the water and a moisture sensor to detect the water level, and then sent those signals to Processing to trigger instrument sounds. These are merely simple tricks. Here are a picture and a short demo of the earliest prototype.

IMG_7373 2

waterdemo
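
The Processing side of that prototype boils down to thresholding the two readings. Here is a minimal sketch of the idea, not my actual code: the file names, the "fsr,moisture" line format, and the threshold values are all placeholders.

import processing.serial.*;
import ddf.minim.*;

Serial port;
Minim minim;
AudioPlayer heavy, wet;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  heavy = minim.loadFile("instrument1.mp3"); // placeholder file names
  wet = minim.loadFile("instrument2.mp3");
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  // nothing to draw; serialEvent does the work
}

void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line == null) return;
  int[] v = int(split(line, ','));        // expects "fsr,moisture" on each line
  if (v.length < 2) return;
  if (v[0] > 300 && !heavy.isPlaying()) { // enough weight on the FSR
    heavy.rewind();
    heavy.play();
  }
  if (v[1] > 500 && !wet.isPlaying()) {   // water has reached the moisture sensor
    wet.rewind();
    wet.play();
  }
}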

I then moved on to the performative aspect. I eventually had several cup sets. The first set was merely a cup filled with drinking water; I attached the moisture sensor to a straw and drank through it, so that the sound changed as I drank. The second set was a cup filled with wet tissues. I intended to demonstrate the hidden water in objects, but couldn't think of a more natural way to perform with it than sticking the straw into the cup. My third set had two cups, one up on a box and the other down on the table. I pierced a tiny hole in the upper cup and transferred water from one cup to the other, while using the two sensors (moisture and FSR) to detect the changes in both cups, so that the sounds could express the flowing water.

And here’s a rehearsal of my final performance:

magicalwater

NIME – Final Performance – The Choir

Ideation:
My idea changed several times during the very first week of preparation. I first wanted to create a ukulele on a rug decorated as a fingerboard, with sensors under the rug, so that one could trigger sounds by stepping on the sensors as if really playing a ukulele. Antonius suggested I use Kinect for this project, but later in class I learnt that the Kinect is difficult to use and I couldn't really count on its accuracy in detecting my position in a grid. I then changed my idea and decided to create an instrument based on interactions among people: I stand in the center and touch the performers surrounding me to generate different sounds. After consideration I finalized my idea: a skin-to-skin choir.

Production:
I implemented my idea with Makey Makey. The Makey Makey board is connected to my laptop, and wires go from the Makey Makey pins to each performer. I, as the conductor of the choir, am connected to GND, so touching a performer closes that performer's circuit and triggers a note. I have to say that Makey Makey is very friendly to beginners and easy to get going.

IMG_0763
(Playing around with Makey Makey: a very simple graphite piano)

Instead of Processing, I used the keyboard piano in GarageBand and chose Chamber Choir as the timbre.

Screen Shot 2017-03-26 at 19.53.31

I also redid the key mapping of the Makey Makey board to adapt it to the keyboard piano in GarageBand.

2017-03-26 16.11.54 (Makey makey key mapping reference)

I had six performers, and six long wires (about 3 meters each) were really hard to organize. To make the project less messy, I used two strands of three twisted wires each. I tried to tape the wire to my wrist, but found that it wasn't very flexible and kind of constrained my movement, so I bent the wire into a ring, which seemed to work better.

IMG_0766 Screen Shot 2017-03-26 at 20.09.38 FullSizeRender 7

I feel that rehearsals were really important for my final performance and helped a lot in improving my instrument.

During the first few rehearsals I figured out how the wires should be set up on the ground.

IMG_0789

At the same time some bugs popped up. For example, Miki, one of my performers, could trigger the sound herself without touching me. We fixed this problem temporarily with a plastic box.

FullSizeRender 8

Later we found that it sometimes happened to other performers as well. To solve it completely, I untwisted the wires more and organized them more neatly. After several trials, the problem was solved.

FullSizeRender 9

To make sure the circuit wouldn't go wrong or get messy, I added a breadboard and put it, together with the Makey Makey, into a small box.

IMG_0806

IMG_0808

Demo: