Recitation 10 – Documentation

Name: Yiyu Yang   Partner: Yudi Ao

Prof. Marcela

Section: Tu/Thur 11:15-12:30

 

Goal: to create a Processing sketch that controls media elements (images, videos and audio) using a physical controller made with Arduino.

Materials Used: a 4-30cm distance sensor, wires, an Arduino Uno, and Processing on a PC.

For the physical controller, we used the distance sensor that we checked out from the equipment room. The circuit is exactly the one below, but without the LED; we did not use any LEDs in our project this time.
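
Here is a minimal sketch of the Arduino side for this setup (the A0 wiring is an assumption; the Processing code below reads single bytes with myPort.read(), so the reading is scaled down to 0-255):

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(A0);  // 0-1023 from the distance sensor
  Serial.write(sensorValue / 4);     // scale to 0-255 so it fits in one byte
  delay(100);                        // keep the serial stream manageable
}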

 

For the media to manipulate, we chose to use a live image and sound. The music is open-source, from http://freemusicarchive.org/, and here is the credit for the song.

 

Our idea for the project is to create a sketch in Processing: by waving a hand toward or away from the sensor, the sketch is drawn out.

 

For the music part, when the hand is close to the sensor the volume is low, and when the hand is farther away the music gets louder.

Here is the music file:

Our inspirations:

For the image part, we love the “smoke” look, like the free flow of air, so we thought about creating flowing air in the sketch using Processing.

For the coding part, we referenced https://github.com/CodingTrain/website/tree/master/CodingChallenges, where the creator has Coding Challenge #24, Perlin Noise Flow Field, which shows how to draw this kind of sketch in Processing.

Finally, here is how our project works:

 

Write a reflection about the ways technology was used in your project to control media. Now that you have made a media controller, think about in what other ways one can use physical computation to create interactive art and manipulate media, and incorporate this into your reflection.

Answer: The technology in our project draws a sketch on the Processing screen, and the media is the music played through Processing as well. The Arduino distance sensor is one way to control both the volume and the sketch as the user moves closer to or farther from the sensor. From the readings, I understand what basic interaction is and the importance of detecting position. In order to perform an interaction between media and technology, there has to be an environment for it. Processing is one such environment, which, through an abundance of graphical capabilities, is extremely well-suited to the electronic arts and visual design communities. There are many ways to use physical computation, in entertainment and even in daily health: for example, a heart-rate watch combines the detection of movement with the technology of analyzing your heart rate.

import processing.serial.*;
import processing.sound.*;

//music
Serial myPort;
SoundFile song;
int valueFromArduino;
float volume = 0.5;
float preValue = 100;

//perlin noise
float inc = 0.1;
int scl = 10;
float zoff = 0;
float newSpeed = 2;
int cols;
int rows;
int noOfPoints = 2500;
Particle[] particles = new Particle[noOfPoints];
PVector[] flowField;

void setup(){
  size(800, 600);
  background(255);
  
  //value from arduino
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[1], 9600);
  
  //music
  song = new SoundFile(this, "/Users/amy/Desktop/media_controller/song.wav");
  song.play();
  
  //perlin noise
  cols = floor(width/scl);
  rows = floor(height/scl);
  
  flowField = new PVector[(cols*rows)];
  
  for(int i = 0; i < noOfPoints; i++) {
    particles[i] = new Particle();
  }
}


void draw() {
  //value from arduino
  while ( myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  println(valueFromArduino);
  
  //music
  if (valueFromArduino < preValue){
    volume -= 0.1;
  }else if (valueFromArduino > preValue){
    volume += 0.1;
  }
  if (volume > 1){
    volume = 1;
  }else if(volume < 0.1){
    volume = 0.1;
  }
  song.amp(volume);
  
  
  //perlin noise
  float yoff = 0;
  for(int y = 0; y < rows; y++) {
    float xoff = 0;
    for(int x = 0; x < cols; x++) {
      int index = (x + y * cols);
      float angle = noise(xoff, yoff, zoff) * TWO_PI * 2;
      PVector v = PVector.fromAngle(angle);
      v.setMag(1);
      flowField[index] = v;
      stroke(0, 50);   
      xoff += inc;
    }
    yoff += inc;
  }
  zoff += (inc/50);
  
  for(int i = 0; i < particles.length; i++) {
    particles[i].follow(flowField);
    particles[i].update();
    particles[i].edges();
    particles[i].show();
    particles[i].updateSpeed();
  }
  
  if (valueFromArduino < preValue){
    newSpeed -= 3;
  }else if (valueFromArduino > preValue){
    newSpeed += 3;
  }
  if (newSpeed > 12){
     newSpeed = 12;
  }else if(newSpeed < 1){
     newSpeed = 1;
  }
  
  preValue = valueFromArduino;

}





class Particle {
  PVector pos = new PVector(random(width),random(height));
  PVector vel = new PVector(0,0);
  PVector acc = new PVector(0,0);
  float maxSpeed = 2;
  float h = 0;
  PVector prevPos = pos.copy();

  void update() {
    vel.add(acc);
    vel.limit(maxSpeed);
    pos.add(vel);
    acc.mult(0);
  }
  
  void updateSpeed(){
    maxSpeed = newSpeed;
  }

  void follow(PVector[] vectors) {
    // find the flow-field cell under this particle
    int x = floor(pos.x / scl);
    int y = floor(pos.y / scl);
    int index = x + y * cols;
    // clamp to the array bounds (>= also catches index == vectors.length)
    if (index >= vectors.length || index < 0) {
      index = vectors.length - 1;
    }
    PVector force = vectors[index];
    applyForce(force);
  }

  void applyForce(PVector force) {
    acc.add(force);
  }

  void show() {
    stroke(0, 50);
    h += 1;
    if (h > 255){
      h = 0;
    }
    strokeWeight(1);
    point(pos.x, pos.y);
    updatePrev();
  }
  
  void updatePrev() {
    prevPos.x = pos.x;
    prevPos.y = pos.y;
  }

  void edges() {
    if (pos.x > width) {
      pos.x = 0;
      updatePrev();
    }
    if (pos.x < 0) {
      pos.x = width;
      updatePrev();
    }

    if (pos.y > height) {
      pos.y = 0;
      updatePrev();
    }
    if (pos.y < 0) {
      pos.y = height;
      updatePrev();
    }
  }
}

Recitation 10 Documentation (Antonius)

Partner: Viktorija

Concept

Viktorija had a video on her computer of her and our friend Greta dancing to a song underneath the Eiffel Tower replica in Tianducheng. We thought it would be cool if we could allow the user to adjust the tint and speed of the video using a potentiometer and a push button, and make it look more like a “rave” video.

Process

At first I worked on the circuit while Viktorija worked on the code. For the circuit, we referenced the cards in our Arduino kit to assemble the potentiometer, pushbutton, and 220Ω resistors on the breadboard. Here is the final circuit:

For the code, we referenced the Arduino-to-Processing code from the serial communication workshop. The Arduino code was relatively simple, but we struggled a bit with the Processing code. Firstly, we kept getting a null expression error; Louis helped us figure out that this was because we forgot to call updateSerial(). Another issue was that we had forgotten to add m.read(), so the video did not play when we ran Processing (thanks Antonius). Here is the finished product:

If the user wants to make it look more like a “rave”, they can do this:

Reflection

Technology was used to control media in that Arduino talks to Processing, allowing the user to alter the tint and speed of a video with the potentiometer and pushbutton.

The possibilities of physical computation to create interactive art and manipulate media are endless. There are certain limitations to the media we consume, in that when we listen to a song, view a piece of art, or watch a video, we usually consume it as it is presented to us. Physical computation allows us to take a piece of media and alter it the way we please. A large part of our media culture in this day and age involves remixing pre-existing materials, and physical computation can simplify this process. For example, if you wanted to make a text remix of a certain passage of a book or an existing poem, you could simply put the original text into an HTML file and use Javascript to replace certain elements of the text to create your remix (thank you Roopa’s This is the Remix class). Once the initial code is there, you can easily replace the original source material with other texts. Physical computation essentially simplifies these creative processes, especially within the present remixing culture.

//***ARDUINO CODE***

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT);
}

void loop() {
  int sensor1 = analogRead(A0)/4;
  int button = digitalRead(2);

  Serial.print(sensor1);
  Serial.print(","); 
  Serial.print(button);
  Serial.println(); 

  delay(100);
}


//***PROCESSING CODE***

import processing.serial.*;
import processing.video.*;

Movie myMovie;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2;
int[] sensorValues;

void setup() {
  size(640, 360);
  myMovie = new Movie(this, "video.mp4");
  myMovie.play();
  setupSerial();
}

void draw() {
  updateSerial();
  printArray(sensorValues);
  image(myMovie, 0, 0, width, height);
  float moviespeed = map(sensorValues[0], 0, 1023, 0, 4);
  myMovie.speed(moviespeed);
  
  if (sensorValues[1] == 1) {
    tint(255, 0, 0);
  } else {
    tint(255);
  }
}

void movieEvent(Movie m) {
  m.read();
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[2], 9600);
  myPort.clear();
  myString = myPort.readStringUntil(10);
  myString = null;
  
  sensorValues = new int[2];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == 2) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

3D Model and 3D Printing

This week we learned how to use Tinkercad to create a 3D model. First, we found a reference on one of the websites posted on the IMA pages. It’s a lamp, so we decided to use that lamp as a model and set up a 3D model to print one. There was a great challenge during the process: it was hard for us to build the rings between each layer of the lamp, because we had to place each cylinder by hand and the workload was huge. Luckily, we only needed to do this once; after that we could simply copy the first one and adjust the size to fit the different layers. It was our first time doing 3D modeling in Tinkercad, so it was hard for us to know how to find examples or simple steps, but we fixed this problem later. As a result, we did make a model of that lamp. It’s not as good as we imagined because the shape is a little bit weird, but it’s a lamp we created by ourselves.

Recitation 09 – Joshua Jensen – Marcela Godoy

Process: I chose the Object Oriented Programming workshop because I felt that the next level would help me better express whatever I chose to present in programming. As we went through the examples, I felt I understood the concept until we were asked to create our own classes and objects. Then the difficulty set in: not only coming up with an idea for a class and object in a short amount of time, but also finding examples and integrating them into the code in a way that wouldn’t ruin the current examples we were building off of.

Examples: I chose to work from the eyes example found in the library presented by Luis. The idea of the example was to create objects that followed the tracking of the mouse and moved the pupil towards it. I chose to take those same balls and see if I could manipulate the gravity to make them float up from the bottom of the screen. To do this, I used an example from the Processing website that demonstrated floating, interacting balls in an enclosed environment. I then tried to blend that in with the other code to create the effect I was looking for.

Results: In the end my code didn’t work; it merely showed a blank white screen with nothing appearing. Unfortunately, class time ran out before I could figure out the solution.

Application: While I’m not certain that I’ll use classes and objects in my final project, the idea of objects allows the code of the final project to look a lot cleaner in the end, so I hope to be able to integrate it into my final code.

Class Object:

int numberOfObjects = 200;
float gravity = -100;
float friction = -0.9;

Object[] objects = new Object[numberOfObjects];

void setup(){
  fullScreen();
  frameRate(50);
  
  //initialize objects
  for (int i = 0; i < numberOfObjects; i++) {
    objects[i] = new Object(random(10,width), random(10, height), random(10,20), color(random(100,255), random(100,255), random(100,255)));
  }
}

void draw() {
  background(255);
  for (Object object : objects) {
    // object.collide();
    object.move();
    object.display();
  }
}

Object:



//Object Class//
class Object {
 //PVector pos;  //pos for position of object
 float rad;    // rad for radius 
 color objColor;  // color of object
 float speed;    // speed of the object
 float x, y;
 float diameter;
 float vx = 0;
 float vy = 0;
 int id;
  // constructor
  Object (float xin, float yin, float din, int idin) {
    // store the arguments; without these assignments every object keeps
    // x = y = 0 and diameter = 0, leaving the screen blank
    x = xin;
    y = yin;
    diameter = din;
    objColor = idin;  // a Processing color is just an int
    speed = random(0, 5);
  }
  
  void update(float mx, float my) {
    PVector mousePos = new PVector(mx, my);
  }
  
 //void collide() {
 //   for (int i = i + 1; i < numberOfObjects; i++) {
 //     float dx = others[i].x - x;
 //     float dy = others[i].y - y;
 //     float distance = sqrt(dx*dx + dy*dy);
 //     float minDist = others[i].diameter/2 + diameter/2;
 //     if (distance < minDist) { 
 //       float angle = atan2(dy, dx);
 //       float targetX = x + cos(angle) * minDist;
 //       float targetY = y + sin(angle) * minDist;
 //       float ax = (targetX - others[i].x) * spring;
 //       float ay = (targetY - others[i].y) * spring;
 //       vx -= ax;
 //       vy -= ay;
 //       others[i].vx += ax;
 //       others[i].vy += ay;
 //     }
 //   }   
 // }
  
   void move() {
    vy += gravity;
    x += vx;
    y += vy;
    if (x + diameter/2 > width) {
      x = width - diameter/2;
      vx *= friction; 
    }
    else if (x - diameter/2 < 0) {
      x = diameter/2;
      vx *= friction;
    }
    if (y + diameter/2 > height) {
      y = height - diameter/2;
      vy *= friction; 
    } 
    else if (y - diameter/2 < 0) {
      y = diameter/2;
      vy *= friction;
    }
  }
        
        
  
  void display(){
    pushMatrix();
    ellipseMode(CENTER);
    noStroke();
    fill(objColor);
    ellipse(x, y, diameter, diameter);
    ellipse(0,0, rad*2, rad*2);
    fill(0);
    popMatrix();
  }    
}

Week 10: Media Controller Recitation (Rudi)

For this week’s project, my partner and I made a webcam image that pixelates and clears up using an infrared distance sensor: the closer the user places their hand (or any object) to the sensor, the clearer the picture becomes. If the user places something farther away from the sensor, the blocks on the webcam image become far larger, and therefore it pixelates more. Actually hooking up the sensor was fairly easy. All that had to be done was connect the sensor to the Arduino, send the values to the Arduino serial monitor, and then pass them to Processing, where, according to the distance from the sensor, the size of the blocks would change from super small (which makes the picture clear) all the way up to where the picture is just made up of blocks.
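
A simplified sketch of the Processing side of this idea (the serial port index and the single-byte 0-255 value are assumptions, following the single-value serial example):

import processing.serial.*;
import processing.video.*;

Serial myPort;
Capture cam;
int valueFromArduino;

void setup() {
  size(640, 480);
  myPort = new Serial(this, Serial.list()[0], 9600);  // port index is an assumption
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();  // one byte, 0-255, from the distance sensor
  }
  if (cam.available()) {
    cam.read();
  }
  // farther from the sensor -> bigger blocks -> more pixelation
  int blockSize = int(map(valueFromArduino, 0, 255, 2, 40));
  cam.loadPixels();
  noStroke();
  for (int y = 0; y < cam.height; y += blockSize) {
    for (int x = 0; x < cam.width; x += blockSize) {
      fill(cam.pixels[x + y * cam.width]);  // color each block from its top-left pixel
      rect(x, y, blockSize, blockSize);
    }
  }
}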

IMG_2640

In this project we used an infrared distance sensor to control our media, which is the webcam. One can often see this in contemporary art these days, whether it is as simple as a button, facial recognition, or any other form of technology that alters some aspect of the art. For example, I was once at an art fair where there was a wall of falling sand on a large display. The whole display was a touch screen, so when the user touched or drew something on the screen, it would be repeated across the screen multiple times in different colours behind the sand, which continued falling. The transformation of technology being used in art is amazing, as that is something that was more or less non-existent prior to the twenty-first century.

Week 12: Mold Making

After the 3D modeling exercise, I started to make the mold and to create a replica of the original 3D printed object.

The process of making the mold was a little more complicated than I expected because of the large size of my 3D printed object, which was about 8cm * 8cm * 10cm. I used more material to make the mold and needed extra strength to take the object out of it.

I stuck the head to the base and made the other four sides of the container. Even though I used hot glue to seal the gaps between the sides of the container, the silicone still leaked after I poured it in. Therefore, I had to tape around the container to make sure the silicone wouldn’t leak too much onto the table.

After the silicone got into shape, I broke the container and tried to take out the 3D printed object. It really took so much strength. To get the object out, I cut the mold with a zig-zag pattern. I did this for two reasons: 1) it made it easy to take out the original object; 2) it would be convenient for making a replica if the mold could open and close easily.

I had a problem when making the mold. Since my original object had a handle on it, to make sure its replica would still have one, I had to be very careful when taking the object out of the mold. Marcela and I used a lot of physical force to pull out the handle.

After that, it was just about making a replica. I mixed 8014A and 8014B together. Since my mold was large, I poured the mixture into the mold twice.

It was like magic to watch the reaction of the mixture: the liquid gradually turned into a solid and its color changed from dark to white.

The following pictures show my replica. As you can see, the handle part was broken. It may be because the part itself was fragile and I used too much force to take it out. Even though the final output was not perfect, I had fun during the process, especially when playing around with the chemicals 🙂

Recitation 10 – Media Controllers

Sydney Fontalvo

Professor: Marcela

Partner: Bella 

April 17, 2018

Materials:

1 * Arduino UNO

1 * 3-Axis Accelerometer 

1 * USB A to B Cable 

Jumper Cables

I was really stumped at the beginning of this recitation because I didn’t really know what to do, since there was such a broad range of things that could’ve been done. Bella came up with the sensor we should use, so we decided to go from there and pick a topic. We set up the circuit right away, and it didn’t take long at all.

We originally were going to use the x-axis of the accelerometer to make a picture in Processing more opaque or more clear, while using the y-axis to make a sound/music file speed up and slow down. We figured that we wouldn’t be able to do that in the class time we were allotted, so Bella came up with another idea.

She thought about taking one of the examples Marcela gave us and connecting it to the Arduino so we could control it using the accelerometer. I agreed, so we got to work on the coding part of the recitation.

We started with the Arduino part first. We took the sketches from the previous recitations, when we both used the same sensor, and tried to adjust them for this recitation by adding the serial communication part. Since we were using my computer to test everything, she sent me the code and we tried it out using the serial monitor. We checked everything and it worked pretty well.

Next was getting everything to work in Processing. Bella began by creating a new sketch and combining parts of Class 11’s example of serial communication from Arduino to Processing, Class 24’s ex08_movie_rotate, and her midterm project’s code. While she was working, I realized that there were going to be multiple values going into Processing, since the accelerometer has three axes instead of just one. I showed Bella my example code from the Serial Communication workshop earlier in the semester, when we learned how to send multiple values between Arduino and Processing.

She incorporated that into the code, and we tried testing it out. No matter what we fixed, we couldn’t get the code to work. We ended up asking Luis for help. He helped us with some mistakes and some things that could’ve made the code better, but it still wasn’t working… until we ACTUALLY moved the accelerometer. The whole time, we just needed to move the accelerometer and it would’ve worked hahaha.

Final Product:

Credit to Luis for helping us and to Marcela for giving us the example in the first place. The final product came out pretty well for something done in such a short amount of time, but there were a couple of small things that could be fixed if we had more time. We had to move the accelerometer quite a lot, and really quickly, just for the picture to come up on the screen. We could’ve fixed the point where the accelerometer started and how often the gif showed up on the screen when we moved it.
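
One possible version of that fix, sketched below: sample the accelerometer’s resting values once and offset every reading by them, instead of using the hard-coded -220 in draw(). The restX/restY names and the ±200 range are hypothetical, not from our actual code:

// Hypothetical calibration add-on: sensorValues[] is the array already
// filled by updateSerial() in the main sketch.
int restX, restY;
boolean calibrated = false;

void calibrateIfNeeded() {
  if (!calibrated) {
    restX = sensorValues[0];  // remember the reading with the sensor at rest
    restY = sensorValues[1];
    calibrated = true;
  }
}

// Then in draw(), after updateSerial():
//   calibrateIfNeeded();
//   translate(map(sensorValues[0] - restX, -200, 200, 0, width),
//             map(sensorValues[1] - restY, -200, 200, 0, height));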

Technology was used in the forms of Arduino and Processing, in the sense that they are tools that help make things faster, better, and easier to use. Like an example we’ve seen in class before from Computer Vision for Artists and Designers, the moving belts have a powerful meaning behind them. If I were to use physical components to create interactive art, I would address the political world and environmental issues in my piece to paint a picture for someone. I would probably do something with plastic bags or recyclable things and go from there.

// ARDUINO SKETCH //

const int xpin = A0;                  // x-axis of the accelerometer
const int ypin = A1;                  // y-axis
const int zpin = A2;                  // z-axis (only on 3-axis models)

void setup() {
  // initialize the serial communications:
  Serial.begin(9600);
  pinMode(A0, INPUT);
  pinMode(A1, INPUT);
  pinMode(A2, INPUT);
}

void loop() {
  // print the sensor values, separated by commas:
  Serial.print(analogRead(xpin));
  Serial.print(",");
  Serial.print(analogRead(ypin));
  Serial.print(",");
  Serial.print(analogRead(zpin));
  Serial.println();
  // delay before next reading:
  delay(100);
}


// PROCESSING SKETCH //

import processing.serial.*;
import processing.video.*;
Movie myMovie;
String myString = null;
int NUM_OF_VALUES = 3;
int[] sensorValues;
Serial myPort;

void setup() {
  size(480, 480);
  myMovie = new Movie(this, "dancing.mp4");
  myMovie.play();
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[3], 9600);
  sensorValues = new int[NUM_OF_VALUES];
}

void draw() {
  updateSerial();
  if (myMovie.available()) {
    myMovie.read();
  }
  pushMatrix();
  translate(sensorValues[0]-220, sensorValues[1]);
  rotate(radians(map(sensorValues[1], 0, height, 0, 360)));
  image(myMovie, 0, 0, sensorValues[2]/3, sensorValues[2]/3);
  popMatrix();
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);  // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Lab 10: Media Control (Sean)

Katie Pellegrino, Tyson Upshaw

Project: For this project, we modified a camera output to give the impression that the user is submerged in water when a moisture sensor reading passes a certain level. Past that threshold, the camera output turns a blue shade and presents the image as circles to represent bubbles.

Circuit:

 

    

 

Method (Arduino): Our Arduino code was very simple; we basically just used the provided sample code for sending single values to Processing. It read the value from the moisture sensor, divided it by four to place it in the correct range (one byte, 0-255), and then sent the value to Processing.

Arduino Code:
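
A minimal sketch matching the description above (the A0 pin is an assumption):

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(A0) / 4;  // divide by four: 0-1023 down to 0-255
  Serial.write(sensorValue);             // send as a single byte to Processing
  delay(100);
}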

 

 

Original View:

 

 

“Underwater” Effect:

 

 

Method (Processing): The Processing code was a little more complicated (copied below). We saved each color channel (red, green, and blue) of a pixel as a variable and created an if/else statement depending on the threshold level of the moisture sensor. If the value passed the threshold, we pushed the blue channel to full and drew the pixels as ellipses; otherwise, the colors remained normal and were drawn as rectangles.

Reflection: In this project, the camera was a powerful tool for controlling our media. Also, mapping the pixels into an array allowed us to arrange and adjust them exactly how we wanted.

Another way one can use physical computing to create interactive art is to develop an image tracker. Using the camera, you can set up the computer to recognize certain shapes or colors, and based on the position or intensity of those objects, you can make something happen. For example, you could track the basic shape of a face and, when the face opens its mouth, trigger an effect (not too dissimilar from Snapchat’s functionality).
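
As a rough sketch of that tracking idea (separate from our project code below), a Processing sketch can scan the webcam each frame for the pixel closest to a target color and react to its position:

import processing.video.*;

Capture cam;
color targetColor;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  targetColor = color(255, 0, 0);  // assumption: track the most red-like pixel
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
  cam.loadPixels();
  float closest = 999999;
  int trackedX = 0;
  int trackedY = 0;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color c = cam.pixels[x + y * cam.width];
      // distance in RGB space between this pixel and the target color
      float d = dist(red(c), green(c), blue(c),
                     red(targetColor), green(targetColor), blue(targetColor));
      if (d < closest) {
        closest = d;
        trackedX = x;
        trackedY = y;
      }
    }
  }
  noFill();
  stroke(0, 255, 0);
  ellipse(trackedX, trackedY, 30, 30);  // mark the tracked spot
}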

import processing.serial.*;
import processing.video.*;

Serial myPort;
int valueFromArduino;

Capture cam; // SOURCE
int size = 20;

void setup() {
  size(1280, 720);
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);
  cam = new Capture(this, 1280, 720);
  cam.start();
}

void draw() {
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }
  println(valueFromArduino);
  cam.loadPixels(); // SOURCE
  for (int y = 0; y < cam.height; y += size) {
    for (int x = 0; x < cam.width; x += size) {
      int location = x + y * cam.width; // index of the block's top-left pixel

      float r = red(cam.pixels[location]);
      float g = green(cam.pixels[location]);
      float b = blue(cam.pixels[location]);

      if (valueFromArduino > 20) {
        fill(r, g, 255);  // push blue to full for the underwater look
        ellipse(x, y, size, size);
      } else {
        fill(r, g, b);
        rect(x, y, size, size);
      }
    }
  }
  // note: cam.updatePixels() isn't needed, since the camera's pixels
  // are only read here, never modified
}

void captureEvent(Capture cam) {
  cam.read();
}

Ix Lab Recitation 10: Media Controller (Antonius)

Name of the Recitation: Media Controller “Photos Of Cecilia & Emily”

Professor: Professor Antonius

Partner: Cecilia Cai

 

GOALS FOR THIS RECITATION:

-Work in pairs in order to create a Processing sketch that controls media elements (images, videos and audio) using a physical controller made with Arduino.

-Document the work to my blog.

-Write a reflection on this theme.

-❤️

 

The Process of making a media controller

Cecilia and I decided to use potentiometers as the physical inputs for a media controller named “Photos of Cecilia & Emily”, because the two of us are very good friends and we both love taking photos. We have literally tons of photos, so we thought: “Why don’t we make something to display them in our own unique way?” Our idea was basically “a display window” for some of our favorite photos, combined with some audio.

 

  • How The Project Works:

There are two potentiometers on the breadboard, and they respectively manipulate the xPos and yPos of a happy little circle in Processing. The color of that little circle keeps changing, because we set the code “fill(random(255), random(255), random(255))”, and it moves according to the values sent from the two potentiometers. Though the boundaries are invisible, the canvas is actually divided into four different areas. When the circle moves into a certain area, for example the upper-right corner marked “area1”, the background image immediately changes to “photo1” and “music1” plays. Similarly, when the circle moves to a different area, the corresponding image is displayed and the corresponding music plays.

(music source: Apple iTunes)

 

  • How The Project Looks In The End:

The physical part looks like this (ignore the button; we first planned to use it but in the end we gave up):

 

Here is a video clip showing how it works:

 

  • Problems & Lessons

For the serial communication between Arduino and Processing, we mainly referred to what we learned in last week’s workshop.

I was mainly responsible for the circuit (Arduino) part and the image effect in Processing, while Cecilia was mainly responsible for the sound effect.

For the image effect, our method was basically one big if-condition to manage the different effects. The location of our little circle was “(xPos, yPos)”, so the statements of our if-condition were meant to divide the canvas into four parts. However, when I tried to put all the instructions directly under that if-condition, it did not work at all. I didn’t really understand the reason, so I decided to try another method. Instead of putting all the instructions there (like the change of image as well as sound), I added another if-condition below, controlled by a new variable. I set a variable “z” whose value changes when a different statement is met; when it holds different values, different effects are generated under the control of the new if-condition, like this:

(Ignore the audio names, which we made up just for fun; they are actually amazing music works downloaded from iTunes.)

In this way, we separated the condition statements from the controlling effects, and the image effect finally worked properly after we reorganized our code like this. I don’t know whether this method really made the difference or there was just something wrong with my first version of the code, but it definitely makes my code more organized and clear.

The biggest problem occurred in controlling the sound effects, and there were several stages to it. The first was the background music. At the very beginning, we decided to use a background music track. Cecilia’s idea was “to have a background music playing all the time [when the button is pressed], and have the point generate four different audio pieces as it moves”. However, when we tested this first attempt, we kept getting “Index out of array” error reports. So we turned to one of the assisting fellows for help, who told us the issue was that each time the point moved to a specific block, an audio piece would start to play, but since we never instructed the previous piece to stop, all the sound files were actually playing together, which led to a terrible catastrophe for the program: overloading. Moreover, the background music made things even worse, as our original design was to play it when the button was pressed and pause it when the button was pushed again. This also contributed to our problem, because the background music would keep pausing and playing as the button was pressed, so the music played intermittently, making the sound effect even more confusing. In order to solve this, we finally removed the background music to simplify the sound effect and added a “stop()” call to the sketch.

However, problems still existed, and the error message jumped out again when the circle moved to a different block. We thought the reason might be that the “stop()” instruction wasn’t functioning effectively, as sometimes it was called when the music piece was not playing, which might cause trouble for the operation of the code. We tried several approaches to solving this problem, and even tried to download another library, but all failed. In the end, we went to the lab and Professor Rudi came to our aid. What he suggested was to set some boolean variables, each linked to one audio file, to track whether it was playing or not. When the audio is playing, the corresponding variable is true, and when it stops, it is false. So for each block, it basically looked like this:

Though it might seem a little complicated, since it was super long and we had to repeat it four times, it worked. This was my first time using booleans, which was a new breakthrough for me, and I really enjoyed the experience.

 

Reflection

For our project, the way the user interacts with computational art through the media controller is relatively simple: the user turns the potentiometers to produce media art. We basically use the values sent from the potentiometers as input data to control the media. When it comes to approaches to using physical computation to create interactive media art, they are indeed very diverse; there are just too many ways of doing it. Probably one of the simplest is to use a simple move to create some kind of data and send those values to the computer in order to generate some art depending on those variables, for example by turning a potentiometer or pushing some buttons.

There are also similar but more “advanced” approaches: detecting and analyzing a human’s different moves, such as a gesture or even a smile. This means there is no need for physical touch; instead, the user’s hands are “free” from the physical interface and they merely have to “let the computer see their moves”. I think this kind of interaction is even more natural and magical, as the user seems to “have no connection” with the installation but can still create art through interaction. I can’t imagine how cool it would be if we could let a user create a piece of art only by smiling: a drawing whose lines and colors are controlled by the different ways of smiling, a song made of musical notes corresponding to the different expressions of the smile, or even a video of the smile with some interesting video editing.

In fact, besides smiles and gestures, visual input, sounds and other senses can all produce media art, and they are often combined together, just like the example of Messa di Voce, created by the author of this week’s reading, Computer Vision for Artists and Designers, in collaboration with Zachary Lieberman. This project “uses whole-body vision-based interactions similar to Krueger’s, but combines them with speech analysis and situates them within a kind of projection-based augmented reality” and “uses a set of vision algorithms to track the locations of the performers’ heads [and] also analyzes the audio signals coming from the performers’ microphones”. In this way, users can use different senses to interact with one device to create their own art, which is amazing and makes users feel like they have limitless possibilities to create art with the help of the digital computer. They are making the art, they are interacting with the art, they are enjoying the art, and they are also a critical part of the media art.

 

//Media Controller <<Arduino>>----------------------------------------------

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT);
}

void loop() {
  int sensor1 = analogRead(A0)/4;
  int sensor2 = analogRead(A1)/4;
  int button = digitalRead(2);

  Serial.print(sensor1);
  Serial.print(",");  // put comma between sensor values
  Serial.print(sensor2);
  Serial.print(",");
  Serial.print(button);
  Serial.println(); // add linefeed after sending the last sensor value

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(100);
}







//Media Controller <<Processing>>----------------------------------------------



import processing.sound.*;
import processing.serial.*;

SoundFile bgm;
SoundFile zombie;
SoundFile sefs;
SoundFile park;
SoundFile space;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 3;   
// This is the number of values you are receiving. 
int[] sensorValues;      /** this array stores values from Arduino **/

int xPos = width/2; 
int yPos = height/2;
int z;

PImage img;
//PFont myFont;
PFont font;
boolean zombiePlaying, sefsPlaying, parkPlaying, spacePlaying;

void setup() {
  size(600, 600);
  img=loadImage("pink.jpg");
  img.resize(600, 600);

  setupSerial();
  z = 0;

  bgm = new SoundFile(this, "bgm.mp3");
  zombie = new SoundFile(this, "1.mp3");
  sefs = new SoundFile(this, "2.mp3");
  park = new SoundFile(this, "3.mp3");
  space = new SoundFile(this, "4.mp3");

  font = loadFont("Chalkduster-48.vlw");
}


void draw() {
  updateSerial();
  printArray(sensorValues);

  xPos = int(map(sensorValues[0], 0, 255, 0, width));
  yPos = int(map(sensorValues[1], 0, 255, 0, height));

  if (xPos > 0 && xPos < width/2 && yPos > 0 && yPos < height/2) {
    z=1;
  } else if (xPos > width/2 && xPos < width && yPos > 0 && yPos < height/2) {
    z=2;
  } else if (xPos > 0 && xPos < width/2 && yPos > height/2 && yPos < height) {
    z=3;
  } else if (xPos > width/2 && xPos < width && yPos > height/2 && yPos < height) {
    z=4;
  }


  if (z==1) {
    img=loadImage("1.jpg");
    img.resize(600, 600);
    if (!zombiePlaying) {   
      zombie.play();
      zombiePlaying = true;
    }
    if (sefsPlaying) {  
      sefs.stop();
      sefsPlaying = false;
    }
    if (parkPlaying) {  
      park.stop();
      parkPlaying = false;
    }
    if (spacePlaying) {  
      space.stop();
      spacePlaying = false;
    }
  } else if (z==2) {
    img=loadImage("2.jpg");
    img.resize(600, 600);
    if (!sefsPlaying) {  
      sefs.play();
      sefsPlaying = true;
    }
    if (zombiePlaying) { 
      zombie.stop();
      zombiePlaying = false;
    }
    if (parkPlaying) {  
      park.stop();
      parkPlaying = false;
    }
    if (spacePlaying) {
      space.stop();
      spacePlaying = false;
    }
  } else if (z==3) {
    img=loadImage("3.jpg");
    img.resize(600, 600);
    if (!parkPlaying) {
      park.play();
      parkPlaying = true;
    }
    if (zombiePlaying) {
      zombie.stop();
      zombiePlaying = false;
    }
    if (sefsPlaying) {
      sefs.stop();
      sefsPlaying = false;
    }
    if (spacePlaying) {
      space.stop();
      spacePlaying = false;
    }
  } else if (z==4) {
    img=loadImage("4.jpg");
    img.resize(600, 600);
    if (!spacePlaying) {
      space.play();
      spacePlaying = true;
    }
    if (zombiePlaying) {
      zombie.stop();
      zombiePlaying = false;
    }
    if (sefsPlaying) {
      sefs.stop();
      sefsPlaying = false;
    }
    if (parkPlaying) {
      park.stop();
      parkPlaying = false;
    }
  }

  image(img, 0, 0);

  fill(255);
  textFont(font, 55);
  text("#PHOTO#", 100, 300);

  strokeWeight(32);
  stroke(random(0, 255), random(0, 255), random(0, 255));
  point(xPos, yPos);
}



void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[ 3 ], 9600);

  myPort.clear();

  myString = myPort.readStringUntil( 10 );  // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Kinetic Interface : Final Project Concept (Amy)

How cool would it be to see your holographic self trapped in a cube?

For my final project, I will be using the Pepper’s ghost technique to create a holographic image, connected to the Kinect. This image will be encapsulated in a cube with the background removed. Skeleton tracking will also be included, so users are able to rotate the cube as they move their hand from left to right.
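
Here is a rough sketch of that rotation mapping, with mouseX standing in for the tracked hand position since the Kinect part isn’t written yet:

void setup() {
  size(600, 600, P3D);
}

void draw() {
  background(0);
  // mouseX stands in for the hand's x position from skeleton tracking
  float angle = map(mouseX, 0, width, -PI, PI);
  translate(width/2, height/2);
  rotateY(angle);  // hand moves left/right -> the cube rotates
  stroke(255);
  noFill();
  box(200);
}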

Because I want to create the illusion of being trapped in this cube, I want all the particles to look like they’re inside the cube. As of right now, the image looks pushed to the border of the cube rather than centered. Within this cube, I will also be adding different effects to fill in the black area. These will include:

  • Rainfall
  • Stars
  • Drawing Particles
  • More to come

The users will be able to change between the effects by putting their arms out in the air for a few seconds. The point is to make them throw their hands up in the air and twirl around until the next effect comes up.
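
Here is a rough sketch of that effect-switching logic, with armsOut() as a hypothetical placeholder for the real skeleton-tracking check:

int currentEffect = 0;
int numEffects = 3;     // rainfall, stars, drawing particles
int gestureStart = -1;  // millis() when the arms-out pose began; -1 = not posing
int holdTime = 2000;    // hold the pose this long (ms) to switch effects

void setup() {
  size(400, 400);
}

// hypothetical placeholder: the real version would check the Kinect skeleton
boolean armsOut() {
  return mousePressed;  // stand-in so the sketch runs without a Kinect
}

void draw() {
  background(0);
  if (armsOut()) {
    if (gestureStart < 0) {
      gestureStart = millis();  // the pose just started
    } else if (millis() - gestureStart > holdTime) {
      currentEffect = (currentEffect + 1) % numEffects;  // move to the next effect
      gestureStart = -1;  // require a fresh hold before switching again
    }
  } else {
    gestureStart = -1;  // pose broken, reset the timer
  }
  text("current effect: " + currentEffect, 20, 20);
}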

The materials that will be used as of right now are:

  • Kinect with skeleton tracking
  • Glass Pane, maybe one or maybe four
  • Display screen
  • Possibly Leap Motion
  • Materials to make a stand if needed

What I have noticed so far is that if you want a larger image of the person, and you want the person to be in full frame, it is best for the Kinect to be closer to the floor rather than at eye level. If you leave the Kinect higher up, the image in Processing will be smaller and you will also have to stand farther away.