NIME Assignment 3

This week, we were asked to combine physical components with Max to create a piece of music that can be controlled by sensors. Continuing my concept of deconstructing classical music, I created a piece of chaos: I used a potentiometer and ultrasonic sensors to control the volumes of different samples. The piece expresses the modern deconstruction and reconstruction of music, and the redefinition of music and noise.

NIME-Week 2 Max instrument

Name: Frederick Qian

Assignment: To create a piece of music using Max

Experience: To create this piece, I started with the idea of taking a piece of classical music and modernizing it, in this case an excerpt from Mozart’s The Magic Flute: the Queen of the Night aria. I added a couple of soundtracks to Max and tried to use them to modernize the aria, but the result was a little messy. Surprisingly, I found it quite intriguing to destroy and deconstruct the aria, so I used keys to trigger random soundtracks and drum rolls to break down the original song, creating a beauty of distortion.

NIME-Week 1 contact mic instrument

Name: Frederick

Assignment: Using a contact mic to make a musical instrument.

Experience: In class, we each made a contact mic under instruction and discovered that it can amplify small sounds. At first, I thought several pieces of paper could create a wonderful illusion of a drum with a unique texture, so I put a piece of paper on top of a cup and used it as a drum. Then it occurred to me that with a cup I could amplify the sound of water, which has always been one of my favorite sounds, so I poured water inside. Surprisingly, it reminded me of the sound of birth, so I based my instrument on the concept of birth. For the performance, I first drum on the paper while shaking the cup with a little water inside, to mimic the chaos inside the womb before birth. Then I poke the paper, to mimic the moment when the water breaks. Finally, I pour water into the cup, suggesting the birth.


Final Project: Under the Sea (with Ian Zhang)


Partner: Ian Zhang

Instructor: Rudi


“Under the Sea” is an interactive art installation that aims to create an immersive experience with a storyline. It brings physical interaction into a movie/theater scenario and lets the audience become the actors. Bringing in a projector and physical components also gives the audience a more authentic experience.


After seeing the dynamic light matrix in class, Ian and I were inspired by the interaction between the light and its users and by the striking effect it produces. The light matrix has so many different expressions that it almost seems to have emotions, like humans do; its changes are both futuristic and aesthetically appealing. We were also inspired by the many ways humans can interact with objects: not only hands on a surface, but also movements and gestures. Interaction today, and even how people picture it in the future, is mostly limited to a surface or an interface, yet humans can do so much more than move their fingers around. The future mode of interaction should hold more possibilities than merely a “cooler” interface. So we set out to improve and combine ways of interacting to make a more immersive, jaw-dropping experience with a good storyline.


To find an appropriate scenario for the light matrix, we first thought of the theater, because a theater requires an intense lighting system. That led to the idea of a “one-man show,” in which one person controls almost everything used on stage: lights, music, backgrounds, physical components, and so on. While developing this idea, we realized that it requires wireless communication technology, which we hadn’t touched upon and knew little about, so we decided to modify the idea because of these technical barriers. Taking real-life technology into consideration, I then developed the idea of an immersive, involving experience program, and we ditched the light matrix. We wanted people who usually watch movies without any sense of involvement to actually participate in the scenes; we wanted everyone to be part of the scene. Learning that there are sensors that can detect people’s movements within a certain range convinced us that this project was feasible.

sketch 1


sketch 2

After confirming what we were going to do, we started writing the code and deciding on the sensors and physical components. Rudi told us about a camera called the Kinect that can accurately track joint movement, so we decided to base our project on this sensor. Right after that decision, we started coding with the help of the OpenProcessing website and the IMA fellows.

Starting from code found online, we changed it a little to suit our needs. First, after figuring out several indexes, we changed the color and size of the sunrise 3D animation and eliminated the grid.

Then we changed the size and position of the sun so that it looks like you are approaching it. Next we added the under-the-sea part, where bubbles are drawn using an ArrayList and a class. Again using OpenProcessing, we adapted a particle example to imitate fish swimming and following people’s hands, which looks fun and interactive.

After finishing the animation without the sensors, we started on the physical setup and attached the sensors. We purchased a bubble gun and a USB fan so that the fan could blow wind on people’s faces and bubbles would appear when playing with the fish. The wind mimics the feeling of flying, and the bubbles add to the romance.

For the sensor part, we tried the Kinect library in Processing and found that the skeleton-color example was the best fit for our project, so we added it to our animation code. In the code, when the camera detects that your hands’ positions are level with your shoulders’, you “fly”: you get closer to the sun. And if you jump at a certain point, you dive into the sea, detected from the changing head coordinates.
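The pose detection boils down to simple threshold tests on joint coordinates. Below is a minimal Java sketch of that logic; the helper class, the joint parameters, and the exact 150/300-pixel thresholds are illustrative assumptions, not our exact Processing code.

```java
// Hypothetical helper mirroring the Kinect pose checks described above.
// Coordinates are in screen space, so y grows downward.
public class PoseChecks {

    // "Fly" when both hands are roughly level with the shoulders
    // (within an assumed 150 px tolerance).
    public static boolean isFlying(float leftHandY, float rightHandY, float shoulderY) {
        return Math.abs(leftHandY - shoulderY) < 150
            && Math.abs(rightHandY - shoulderY) < 150;
    }

    // "Dive" when the head drops well below a standing baseline
    // captured earlier in the ride (an assumed 300 px drop).
    public static boolean isDiving(float headY, float standingHeadY) {
        return headY - standingHeadY > 300;
    }

    public static void main(String[] args) {
        System.out.println(isFlying(400, 410, 395)); // hands level with shoulders
        System.out.println(isDiving(900, 500));      // head dropped 400 px after a squat
    }
}
```

In the real sketch these booleans gate the camera movement: flying pushes you toward the sun, and diving switches to the underwater scene.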

Then we used the right hand’s coordinates to control the fish.

At the same time, we used the school’s 3D printer to print a doorknob to add variety to the ways of interacting. We planned to insert a potentiometer inside so that a twisting motion would open the door and unlock the next scene.
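A twist of the knob can be read as a simple threshold on the potentiometer’s analog value sent over serial. The sketch below is a hypothetical Java illustration: 0–1023 is the Arduino’s 10-bit ADC range, and the threshold of 700 is an assumed value, not our calibrated one.

```java
// Hypothetical doorknob logic: the knob's potentiometer value
// arrives over serial as a raw 10-bit ADC reading (0..1023).
public class Doorknob {
    // Assumed trigger point; a real build would calibrate this.
    public static final int OPEN_THRESHOLD = 700;

    // The door unlocks once the knob is twisted past the threshold.
    public static boolean isDoorOpen(int potValue) {
        return potValue > OPEN_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(isDoorOpen(120)); // knob at rest
        System.out.println(isDoorOpen(900)); // knob twisted
    }
}
```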

So we added a door and a mermaid’s palace scene to the existing scenes, and we used the swimming fish to prompt users to open the door: once you have played with the fish long enough, they escape your control and linger around the physical knob mounted on the projected door on the cave. Unfortunately, our plan to load a video of the mermaid singing failed; the video kept looping, so we had to ditch that idea.


We were told that we could not use the projector in the show, so we set up and ran everything in the academic building at night. Using white fabric and poles, we created a site.

We filmed the whole site working and then created a smaller version that can be displayed on a computer screen.

User manual:

Open the program, then hold up your arms like Superman does (wind)

Wait until you are close enough to the sun, stop flying, then jump and dive (bubbles)

Use your hands to play and interact with the fish, and let them track you

Follow the fish and twist the doorknob



Always pay attention to details: the bubble gun and the fan did not work perfectly; they went on and off.

Choose the right place to set up the project: the Kinect camera malfunctions when it detects too many people.

Invest more time in coding and debugging: the 3D animation has a major flaw in which you can see the sun (a golden globe) beneath the sea level, and the framerate is disappointing.

Interaction can be 3D: interfaces are not the only way to do interaction

3D modeling is not necessarily hard: with Tinkercad, it is quite approachable

Learn how to create multiple scenes in Processing

Learn how to create and modify 3D animation in Processing

Learn how to find one certain line of code in a short period of time.


A better setting is needed: the curtain and poles failed to create a more immersive background

More sensors and more components: a Kinect, a fan, a bubble gun (driven by a motor) and a potentiometer are not interactive enough. Adding more sensors and components would bring more fun.

Improve the framerate

Improve the animation: make it more visually appealing

Bring in more stories/ Have a more complex and intriguing storyline

Polish the whole project

PS: Due to the space limit, the Arduino code won’t be listed.

//3D Terrain Generation - Adam Vozzo
//Movie myMovie;
PImage image,image2;
int theater=0;
 boolean sit = false;
 String message1 = "Superman, welcome to my palace!" ;
 String message2 = "Superman, soar!";
 String message3 = "Squat to dive!";
 String message4 = "please give me your applause!"; // used in theater()
PImage door,mountain;
 PGraphics underwaterz;
 bubble[] bubbleArray = new bubble[100];
  int speed1=1;
     int  speed2=-1;
int z = 0;
int t=0;
int prevt=0;
int cols, rows;
int scl = 20; //Scale of waves
int m;
boolean push;
//fill view with terrain
int w =6000 ; // width of sea
int h = 8000; //  height of sea

float flying = 0; //the speed at which the noise generation is moved

float [][] terrain; //2d array to make the grid
float stopY,handY;
boolean dive=false;
color c1 = color(35, 205, 219); //strip fill
color c2 = color(22, 57, 180); //strip stroke

//To aid colour variability
float ca = 0; 
float shoulderY,handX;
//scanline thickness
int t1;
//Sun Rotation
float r1 = 0, sunrise=800;
float perspective= PI/2;
boolean applause=false;
import processing.serial.*;

Serial myPort;
int valueFromArduino;

void setup() {
  image = loadImage("theater2.jpg");
  image2 = loadImage("little mermaid.png");

   kinect = new KinectPV2(this);


  colorMode(HSB, 360); //HSB to have better control over the brightness of colours, and to smoothly transition the background

  //The size of the grid
  cols = w / scl;
  rows = h / scl;

  terrain = new float [cols] [rows];

  t1 = 10; //Decide the thickness of scanlines
  for (int i=0; i< bubbleArray.length;i++){
   bubbleArray[i]= new bubble((int)random(100,300),(int)random(height+10,height+1000),(int)random(15,55));
 bouncers = new ArrayList();

  for (int i = 0; i < 200; i++) {
    Mover m = new Mover();
    bouncers.add(m);
  }
  frameRate (30);
  underwaterz = createGraphics(width,height);
  // this prints out the list of all available serial ports on your computer.
   myPort = new Serial(this, Serial.list()[ 0 ], 9600);

void draw() {
  if(abs(trackY-handY)<150 &&  abs(shoulderY-trackY)<200){push=true;}else{push=false;}
//print(mouseX); print(","); println (mouseY);
 if (z>2000&&z<2500){stopY=headY;}
if (z<3000&& push==true){
text(message2, 800, 400);
if( z==3000){text(message3, 850, 400);}
if(z==3000 && headY-stopY>300){dive=true;}
if(dive==true &&t<200 && prevt<=t){prevt=t;t+=20;}
//else if(h<4000){h++;println(h);}
//else if (perspective> PI*0.49){perspective=perspective-PI/300;}
  //Lights coming from different angles to achieve desired lighting
  pointLight(255, 255, 255, -width, -height, -width);
  pointLight(255, 255, 255, width, height, width);

  //Sunlight reflection
  //the combination of these spotlights increases the intensity of the light closer to the sun
  spotLight(50, 30, 60, width/6.6, -height-300, -500, width/5, height, -100, PI/10, 3); //Sun Reflection off water
  spotLight(50, 300, 600, width/6.6, -height, -200, width/5, height-400, -400, PI/2, 3); //brightens the ocean and sun

  //println(frameRate); //to analyse what slows the sketch and make it more efficient

  //smooth colour transition background
  if (ca > 360) {
    ca = 0;
  } else {
    ca += 1;
  color c6 = color(ca, 200, 200);

  //rotation speed and creation of the sun
  r1 += 0.008;

  //calculating the movement of the grid
  flying -= 0.02;
  float yoff = flying; //y offset
  for (int y = 0; y < rows; y++) {
    float xoff = 0; //x offset
    for (int x = 0; x < cols; x++) {
      terrain [x][y] = map(noise(xoff, yoff), 0, 1, -150, 130); //smaller mapping of noise, lower waves
      xoff += 0.02; //smaller the value, more precise noise calculation
    yoff+= 0.09; //smaller the value, more precise noise calculation

  //Matrix to stop the sun and scanlines being affected by the moevement here
  //Defining the properties of the grid
  translate (width/2, height/2+200);
  rotateX(perspective); //60 degrees, flyover perspective

  translate(-w/2, -h/2); //centers the triangle strip

  //need to consistently adjust the Z-axis of the grid, not just random
  //achieved using the noise above
  //Nested loop to determine grid verticies
  for (int y = 0; y < rows-1; y++) {
    for (int x = 0; x < cols; x++) {
      vertex(x*scl, y*scl, terrain [x] [y]);
      vertex(x*scl, (y+1)*scl, terrain [x] [y+1]);

  //Layer lines over the grid and Sun to look like a CRT monitor
else {underwater();}
 //if (millis()-m<10000) {
 //else {
class bubble{
   int s1;
   int s3= 2;
   int s2;
  int x, y;
  float ex,ey;
  bubble (int x, int y, int z){
void display(){
underwaterz.fill(255, 10);
  underwaterz.ellipse (ex, ey, s1, s1);
//  ey--;
  underwaterz.arc(ex, ey, s2, s2, radians(200), radians(260)); 
  underwaterz.arc(ex, ey, s2, s2, radians(300), radians(310));

ArrayList <Mover> bouncers;
float trackX,trackY;
float headX,headY;
float bottomX,bottomY;
int bewegungsModus = 3; // "movement mode" (German identifier kept as-is)
class Mover
  PVector Fdirection;
  PVector Flocation;

  float Fspeed;
  float FSPEED;

  float FnoiseScale;
  float FnoiseStrength;
  float FforceStrength;

  float FellipseSize;
  color Fc;

  Mover () // constructor = setup of the Mover class

  Mover (float Fx, float Fy) // constructor = setup of the Mover class at a given position
    setRandomValues ();

  // SET ---------------------------

  void setRandomValues ()
    Flocation = new PVector (random (width), random (height));
    FellipseSize = random (4, 15);

    float Fangle = random (TWO_PI);
    Fdirection = new PVector (cos (Fangle), sin (Fangle));

    Fspeed = random (4, 7);
    FSPEED = Fspeed;
    FnoiseScale = 80;
    FnoiseStrength = 1;
    FforceStrength = random (0.1, 0.2);

  void setRandomColor ()
    int colorDice = (int) random (4);

    if (colorDice == 0) Fc = #ffedbc;
    else if (colorDice == 1) Fc = #A75265;
    else if (colorDice == 2) Fc = #ec7263;
    else Fc = #febe7e;

  // GENEREL ------------------------------

  void update ()
    update (0);

  void update (int Fmode){
    if (Fmode == 3) // seek
      Fspeed = FSPEED * 0.7;
     seek (trackX, trackY);}

  // FLOCK ------------------------------

  void flock (ArrayList <Mover> boids)

    PVector Fother;
    float FotherSize ;

    PVector FcohesionSum = new PVector (0, 0);
    float FcohesionCount = 0;

    PVector FseperationSum = new PVector (0, 0);
    float FseperationCount = 0;

    PVector FalignSum = new PVector (0, 0);
    float FspeedSum = 0;
    float FalignCount = 0;

    for (int Fi = 0; Fi < boids.size(); Fi++)
      Fother = boids.get(Fi).Flocation;
      FotherSize = boids.get(Fi).FellipseSize;

      float Fdistance = PVector.dist (Fother, Flocation);

      if (Fdistance > 0 && Fdistance <70) //align + cohesion
        FcohesionSum.add (Fother);

        FalignSum.add (boids.get(Fi).Fdirection);
        FspeedSum += boids.get(Fi).Fspeed;

      if (Fdistance > 0 && Fdistance < (FellipseSize+FotherSize)*1.2) // separate on collision
        float Fangle = atan2 (Flocation.y-Fother.y, Flocation.x-Fother.x);

        FseperationSum.add (cos (Fangle), sin (Fangle), 0);

      if (FalignCount > 8 && FseperationCount > 12) break;

    // cohesion: move toward the center of your neighbors
    // separation: don't run into others
    // align: move in the direction of your neighbors

    if (FcohesionCount > 0)
      FcohesionSum.div (FcohesionCount);
      cohesion (FcohesionSum, 1);

    if (FalignCount > 0)
      FspeedSum /= FalignCount;
      FalignSum.div (FalignCount);
      align (FalignSum, FspeedSum, 1.3);

    if (FseperationCount > 0)
      FseperationSum.div (FseperationCount);
      seperation (FseperationSum, 2);

  void cohesion (PVector Fforce, float Fstrength)
    steer (Fforce.x, Fforce.y, Fstrength);

  void seperation (PVector Fforce, float Fstrength)
    Fforce.limit (Fstrength*FforceStrength);

    Fdirection.add (Fforce);

    Fspeed *= 1.1;
    Fspeed = constrain (Fspeed, 0, FSPEED * 1.5);

  void align (PVector Fforce, float FforceSpeed, float Fstrength)
    Fspeed = lerp (Fspeed, FforceSpeed, Fstrength*FforceStrength);

    Fforce.mult (Fstrength*FforceStrength);

    Fdirection.add (Fforce);

  // HOW TO MOVE ----------------------------

  void steer (float Fx, float Fy)
    steer (Fx, Fy, 1);

  void steer (float Fx, float Fy, float Fstrength)

    float Fangle = atan2 (Fy-Flocation.y, Fx -Flocation.x);

    PVector Fforce = new PVector (cos (Fangle), sin (Fangle));
    Fforce.mult (FforceStrength * Fstrength);

    Fdirection.add (Fforce);

    float FcurrentDistance = dist (Fx, Fy, Flocation.x, Flocation.y);

    if (FcurrentDistance < 70)
      Fspeed = map (FcurrentDistance, 0, 70, 0, FSPEED);
    else Fspeed = FSPEED;

  void seek (float Fx, float Fy)
    seek (Fx, Fy, 1);

  void seek (float Fx, float Fy, float Fstrength)

    float Fangle = atan2 (Fy-Flocation.y, Fx -Flocation.x);

    PVector Fforce = new PVector (cos (Fangle), sin (Fangle));
    Fforce.mult (FforceStrength * Fstrength);

    Fdirection.add (Fforce);


  // MOVE -----------------------------------------

  void move ()

    PVector Fvelocity = Fdirection.get();
    Fvelocity.mult (Fspeed);
    Flocation.add (Fvelocity);

  // DISPLAY ---------------------------------------------------------------

  void Fdisplay ()
    underwaterz.fill (Fc);
    underwaterz.ellipse (Flocation.x, Flocation.y, FellipseSize, FellipseSize);
import KinectPV2.KJoint;
import KinectPV2.*;

KinectPV2 kinect;

void kinect(){
  ArrayList<KSkeleton> skeletonArray =  kinect.getSkeletonColorMap();

  //individual JOINTS
  for (int i = 0; i < skeletonArray.size(); i++) {
    KSkeleton skeleton = (KSkeleton) skeletonArray.get(i);
    if (skeleton.isTracked()) {
      KJoint[] joints = skeleton.getJoints();

      color col  = skeleton.getIndexColor();

      //draw different color for each hand state
      trackX = joints[KinectPV2.JointType_HandTipRight].getX();
      trackY = joints[KinectPV2.JointType_HandTipRight].getY();
      handY  = joints[KinectPV2.JointType_HandTipLeft].getY();
      handX  = joints[KinectPV2.JointType_HandTipLeft].getX();

  fill(255, 0, 0);
  text(frameRate, 50, 50);

void drawBody(KJoint[] joints) {

  drawJoint(joints, KinectPV2.JointType_HandTipLeft);
  drawJoint(joints, KinectPV2.JointType_HandTipRight);

  drawJoint(joints, KinectPV2.JointType_Head);
    drawJoint(joints, KinectPV2.JointType_SpineBase);

//draw joint
void drawJoint(KJoint[] joints, int jointType) {
  translate(joints[jointType].getX(), joints[jointType].getY());
// print(joints[jointType].getX());
  underwaterz.ellipse(0, 0, 25, 25);

void handState(int handState) {
  switch(handState) {
  case KinectPV2.HandState_Open:
    fill(0, 255, 0);
    break;
  case KinectPV2.HandState_Closed:
    fill(255, 0, 0);
    break;
  case KinectPV2.HandState_Lasso:
    fill(0, 0, 255);
    break;
  case KinectPV2.HandState_NotTracked:
    fill(255, 255, 255);
    break;
  }
}
void scanlines(int thicc) {
  translate(0, 0, 350); //needs to be translated forward so the lines dont clip with the strip
  stroke(30, 50); //added some transparency so the lines dont darken the image as much
  strokeWeight(thicc); //Can vary the thickness of the lines, but looks better consistent

  //Drawing lines from the top of the window to the bottom, with gaps to emulate real scanlines
  for (int i = 0; i < height; i++) {
    if (i % 4 == 1) {
      line(0, i, width, i);
void sun(float rotation, float sunrise) {
  PVector location = new PVector(width/2,100+height/2,-1200); //not needed, but keeping for referece
 // println(location.x);
  pushMatrix(); //Matrix so the rotations dont effect the triangle strip
  translate(0, sunrise, -7000); //Pushed far back behind the strip
  translate(width/2, 100+height/2);
  fill(1060, 1000, 1000); //60
  fill(50, 1000, 1000, 200); //As bright and saturated as possible, but transparency darkens it
  stroke(60, 150, 300);
  sphere(500); //A larger transparent sphere with stroke, ecompassing the smaller sphere
void theater() {
  // myMovie = new Movie(this, "little");
  translate(0, 0, theater);
  rect(0, 0, width, 50);
  rect(0, height-50, width, 50);
  image(image, -100, 50, width+200, height-100);
  image(image2, 900, 600, 150, 300);
  if (theater<110) {
  if (theater==110) {

    text(message1, 650, 350);
    if (sit==false) {
      text(message1, 650, 350);
      text(message2, 600, 450);
    } else {
      text(message3, 590, 350);
      text(message4, 670, 450);

    if (applause==true) {
      image(myMovie, 0, 0, width, height);
  if (bottomY==200)
    sit = true;
  if (abs(trackY-handY)<150 && abs(trackX-handX)<50)

//void movieEvent(Movie m) {;

void underwater(){
  while ( myPort.available() > 0) {
    valueFromArduino =;
//if(millis()-m<20000){trackX=joints[ KinectPV2.JointType_HandTipRight].getX();
//trackY=joints[ KinectPV2.JointType_HandTipRight].getY();}
  for ( int i=0;i <= height;i++){
    //for (int z =1; z<10;z++){
    //  for ( int i=10*z; i<10*(z+1)-5;i++){

    //  for (int i=10*(z+1)-4; i< 10*(z+1) ;i++){

  int Fishi = 0;
  while (Fishi < bouncers.size () )
    Mover m = bouncers.get(Fishi);
    if (bewegungsModus != 5) m.update (bewegungsModus);
    Fishi = Fishi + 1;
void underwater2(){
for ( int i=0;i <= height;i++){

In class assignment Computer Vision

For the Computer Vision example, I chose biometrics as my topic. The reason is that biometrics has, in fact, become a part of our daily life. As we all know, fingerprint reading is a mature technology used in everyday objects like cell phones and laptops, so I did some research on it. People have developed several recognition technologies, including fingerprint recognition, facial recognition, and body-movement recognition. Apart from fingerprint and facial recognition, these are still developing or have not yet been used in consumer devices, so I will focus on facial recognition. Facial recognition uses several infrared sensors and a camera to capture an image and locate the face, then uses complex algorithms to do the recognition. During development, I think one of the major problems developers face is the algorithms: how much data should be extracted from the sensors’ values, and how can that data be used to tell faces apart? In my daily life, I use facial recognition a lot. My laptop is equipped with infrared cameras and has the Windows Hello function. Though it sometimes fails to recognize my face, which bothers me, the function is extremely useful and very fast, which amazes me and triggers my curiosity about facial recognition technology.

Lab11 Media controller

Date: Nov.25th

Instructor: Rudi


Now that you know how to work with images, video, webcam and audio in Processing, your assignment is to create a Processing sketch that controls media elements (images, videos and audio) using a physical controller made with Arduino.

This will demonstrate your understanding of Processing and Arduino. Think about how you can incorporate interactivity and computation into this week’s exercise.

For the physical controller, think about what physical values you could receive and use. You can use potentiometers or sensors in your kit, or you are welcome to check out and use a sensor from the list below.

For the Processing sketch, you can load and manipulate any media element, such as image, video, webcam and audio. You can change the size, ratio, color or pixel values of an image, or you can create a VJ-like effect by changing the position of the movie being played according to the value of the sensor. It will also be interesting to change the volume, rate, and pan of an audio sample.

You can use your own content or any image, video, and audio that you want, as long as you credit your sources. Keep intact any copyright or creative commons notices for the sound, video or image. Credit the original creator of the work whenever specified. Include the title of the work and the original URL for the work if applicable.

Here are a few resources for public domain and creative commons sound and images:

Experience: After trying to add a live feed to the Yellowtail example, I decided to use potentiometers to draw the lines instead of pressing and dragging the mouse. First, I added camera capture to the code and created something like this:

However, after I successfully set up the potentiometers and sent their data to Processing, I found it extremely hard to use the potentiometer data to draw the lines, and it was impossible (for me) to capture the color of the video and use it as the lines’ color. After trying for the whole weekend, I still could not achieve the effects I wanted.
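For reference, one way the potentiometer data could drive the drawing is to remap each 10-bit reading onto the canvas and feed the result to Gesture.addPoint() in place of mouseX/mouseY. The helper below is a hypothetical Java sketch of that mapping, not the code I ended up with; mapRange mirrors Processing's built-in map() function.

```java
// Hypothetical mapping from two potentiometer readings (0..1023)
// to a point on the 640x480 Yellowtail canvas.
public class PotMapping {

    // Linear remap, equivalent to Processing's map() function.
    public static float mapRange(float v, float inLo, float inHi,
                                 float outLo, float outHi) {
        return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
    }

    // One pot per axis; the result could replace mouseX/mouseY
    // when calling Gesture.addPoint().
    public static float[] potsToPoint(int potX, int potY) {
        return new float[] {
            mapRange(potX, 0, 1023, 0, 640),
            mapRange(potY, 0, 1023, 0, 480)
        };
    }

    public static void main(String[] args) {
        float[] p = potsToPoint(512, 256);
        System.out.println(p[0] + ", " + p[1]);
    }
}
```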

/**
 * Yellowtail
 * by Golan Levin ( 
 * Click, drag, and release to create a kinetic gesture.
 * Yellowtail (1998-2000) is an interactive software system for the gestural 
 * creation and performance of real-time abstract animation. Yellowtail repeats 
 * a user's strokes end-over-end, enabling simultaneous specification of a 
 * line's shape and quality of movement. Each line repeats according to its 
 * own period, producing an ever-changing and responsive display of lively, 
 * worm-like textures.
 */

import java.awt.Polygon;
import*; // needed for the Capture camera class

Gesture gestureArray[];
final int nGestures = 36;  // Number of gestures
final int minMove = 3;     // Minimum travel for a new point
int currentGestureID;

Polygon tempP;
int tmpXp[];
int tmpYp[];
Capture cam;
import processing.serial.*;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2;   /** number of values expected from Arduino **/
int[] sensorValues;      /** this array stores values from Arduino **/

void setup() {
  size(640, 480, P2D);
    cam = new Capture(this, 640, 480);
  background(0, 0, 0);

  currentGestureID = -1;
  gestureArray = new Gesture[nGestures];
  for (int i = 0; i < nGestures; i++) {
    gestureArray[i] = new Gesture(width, height);

void draw() {

  if (cam.available()) {;
  }
  image(cam, 0, 0);

 int r=int(random(0,255));
   int g=int(random(0,255));
   int b=int(random(0,255));
  for (int i = 0; i < nGestures; i++) {
    renderGesture(gestureArray[i], width, height);

void serialEvent(Serial myPort) {
  currentGestureID = (currentGestureID+1) % nGestures;
  Gesture G = gestureArray[currentGestureID];
  G.addPoint(mouseX, mouseY);

void mouseDragged() {
  if (currentGestureID >= 0) {
    Gesture G = gestureArray[currentGestureID];
    if (G.distToLast(mouseX, mouseY) > minMove) {
      G.addPoint(mouseX, mouseY);

void keyPressed() {
  if (key == '+' || key == '=') {
    if (currentGestureID >= 0) {
      float th = gestureArray[currentGestureID].thickness;
      gestureArray[currentGestureID].thickness = min(96, th+1);
  } else if (key == '-') {
    if (currentGestureID >= 0) {
      float th = gestureArray[currentGestureID].thickness;
      gestureArray[currentGestureID].thickness = max(2, th-1);
  } else if (key == ' ') {

void renderGesture(Gesture gesture, int w, int h) {
  if (gesture.exists) {
    if (gesture.nPolys > 0) {
      Polygon polygons[] = gesture.polygons;
      int crosses[] = gesture.crosses;

      int xpts[];
      int ypts[];
      Polygon p;
      int cr;

      int gnp = gesture.nPolys;
      for (int i=0; i<gnp; i++) {

        p = polygons[i];
        xpts = p.xpoints;
        ypts = p.ypoints;

        vertex(xpts[0], ypts[0]);
        vertex(xpts[1], ypts[1]);
        vertex(xpts[2], ypts[2]);
        vertex(xpts[3], ypts[3]);

        if ((cr = crosses[i]) > 0) {
          if ((cr & 3)>0) {
            vertex(xpts[0]+w, ypts[0]);
            vertex(xpts[1]+w, ypts[1]);
            vertex(xpts[2]+w, ypts[2]);
            vertex(xpts[3]+w, ypts[3]);

            vertex(xpts[0]-w, ypts[0]);
            vertex(xpts[1]-w, ypts[1]);
            vertex(xpts[2]-w, ypts[2]);
            vertex(xpts[3]-w, ypts[3]);
          if ((cr & 12)>0) {
            vertex(xpts[0], ypts[0]+h);
            vertex(xpts[1], ypts[1]+h);
            vertex(xpts[2], ypts[2]+h);
            vertex(xpts[3], ypts[3]+h);

            vertex(xpts[0], ypts[0]-h);
            vertex(xpts[1], ypts[1]-h);
            vertex(xpts[2], ypts[2]-h);
            vertex(xpts[3], ypts[3]-h);

          // I have knowingly retained the small flaw of not
          // completely dealing with the corner conditions
          // (the case in which both of the above are true).

void updateGeometry() {
  Gesture J;
  for (int g=0; g<nGestures; g++) {
    if ((J=gestureArray[g]).exists) {
      if (g!=currentGestureID) {
      } else if (!mousePressed) {

void advanceGesture(Gesture gesture) {
  // Move a Gesture one step
  if (gesture.exists) { // check
    int nPts = gesture.nPoints;
    int nPts1 = nPts-1;
    Vec3f path[];
    float jx = gesture.jumpDx;
    float jy = gesture.jumpDy;

    if (nPts > 0) {
      path = gesture.path;
      for (int i = nPts1; i > 0; i--) {
        path[i].x = path[i-1].x;
        path[i].y = path[i-1].y;
      path[0].x = path[nPts1].x - jx;
      path[0].y = path[nPts1].y - jy;

void clearGestures() {
  for (int i = 0; i < nGestures; i++) {
void setupSerial() {
  myPort = new Serial(this, Serial.list()[ 0 ], 9600);
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" 
  // and replace PORT_INDEX above with the index number of the port.

  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[2];

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
class Gesture {

  float  damp = 5.0;
  float  dampInv = 1.0 / damp;
  float  damp1 = damp - 1;

  int w;
  int h;
  int capacity;

  Vec3f path[];
  int crosses[];
  Polygon polygons[];
  int nPoints;
  int nPolys;

  float jumpDx, jumpDy;
  boolean exists;
  float INIT_TH = 14;
  float thickness = INIT_TH;

  Gesture(int mw, int mh) {
    w = mw;
    h = mh;
    capacity = 600;
    path = new Vec3f[capacity];
    polygons = new Polygon[capacity];
    crosses  = new int[capacity];
    for (int i=0; i<capacity; i++) {
      polygons[i] = new Polygon();
      polygons[i].npoints = 4;
      path[i] = new Vec3f();
      crosses[i] = 0;
    }
    nPoints = 0;
    nPolys = 0;

    exists = false;
    jumpDx = 0;
    jumpDy = 0;
  }

  void clear() {
    nPoints = 0;
    exists = false;
    thickness = INIT_TH;
  }

  void clearPolys() {
    nPolys = 0;
  }

  void addPoint(float x, float y) {
    if (nPoints >= capacity) {
      // there are all sorts of possible solutions here,
      // but for abject simplicity, I don't do anything.
    }
    else {
      float v = distToLast(x, y);
      float p = getPressureFromVelocity(v);
      path[nPoints++].set(x, y, p);

      if (nPoints > 1) {
        exists = true;
        jumpDx = path[nPoints-1].x - path[0].x;
        jumpDy = path[nPoints-1].y - path[0].y;
      }
    }
  }

  float getPressureFromVelocity(float v) {
    final float scale = 18;
    final float minP = 0.02;
    final float oldP = (nPoints > 0) ? path[nPoints-1].p : 0;
    return ((minP + max(0, 1.0 - v/scale)) + (damp1*oldP))*dampInv;
  }

  void setPressures() {
    // pressures vary from 0...1
    float pressure;
    float t = 0;
    float u = 1.0 / (nPoints - 1)*TWO_PI;
    for (int i = 0; i < nPoints; i++) {
      pressure = sqrt((1.0 - cos(t))*0.5);
      path[i].p = pressure;
      t += u;
    }
  }

  float distToLast(float ix, float iy) {
    if (nPoints > 0) {
      Vec3f v = path[nPoints-1];
      float dx = v.x - ix;
      float dy = v.y - iy;
      return mag(dx, dy);
    }
    else {
      return 30;
    }
  }

  void compile() {
    // compute the polygons from the path of Vec3f's
    if (exists) {

      Vec3f p0, p1, p2;
      float radius0, radius1;
      float ax, bx, cx, dx;
      float ay, by, cy, dy;
      int   axi, bxi, cxi, dxi, axip, axid;
      int   ayi, byi, cyi, dyi, ayip, ayid;
      float p1x, p1y;
      float dx01, dy01, hp01, si01, co01;
      float dx02, dy02, hp02, si02, co02;
      float dx13, dy13, hp13, si13, co13;
      float taper = 1.0;

      int  nPathPoints = nPoints - 1;
      int  lastPolyIndex = nPathPoints - 1;
      float npm1finv =  1.0 / max(1, nPathPoints - 1);

      // handle the first point
      p0 = path[0];
      p1 = path[1];
      radius0 = p0.p * thickness;
      dx01 = p1.x - p0.x;
      dy01 = p1.y - p0.y;
      hp01 = sqrt(dx01*dx01 + dy01*dy01);
      if (hp01 == 0) {
        hp01 = 0.0001;  // avoid a divide by zero below
      }
      co01 = radius0 * dx01 / hp01;
      si01 = radius0 * dy01 / hp01;
      ax = p0.x - si01;
      ay = p0.y + co01;
      bx = p0.x + si01;
      by = p0.y - co01;

      int xpts[];
      int ypts[];

      int LC = 20;
      int RC = w-LC;
      int TC = 20;
      int BC = h-TC;
      float mint = 0.618;
      float tapow = 0.4;

      // handle the middle points
      int i = 1;
      Polygon apoly;
      for (i = 1; i < nPathPoints; i++) {
        taper = pow((lastPolyIndex-i)*npm1finv, tapow);

        p0 = path[i-1];
        p1 = path[i  ];
        p2 = path[i+1];
        p1x = p1.x;
        p1y = p1.y;
        radius1 = Math.max(mint, taper*p1.p*thickness);

        // assumes all segments are roughly the same length...
        dx02 = p2.x - p0.x;
        dy02 = p2.y - p0.y;
        hp02 = (float) Math.sqrt(dx02*dx02 + dy02*dy02);
        if (hp02 != 0) {
          hp02 = radius1/hp02;
        }
        co02 = dx02 * hp02;
        si02 = dy02 * hp02;

        // translate the integer coordinates to the viewing rectangle
        axi = axip = (int)ax;
        ayi = ayip = (int)ay;
        axid = axi-axip;
        ayid = ayi-ayip;

        // set the vertices of the polygon
        apoly = polygons[nPolys++];
        xpts = apoly.xpoints;
        ypts = apoly.ypoints;
        xpts[0] = axi = axid + axip;
        xpts[1] = bxi = axid + (int) bx;
        xpts[2] = cxi = axid + (int)(cx = p1x + si02);
        xpts[3] = dxi = axid + (int)(dx = p1x - si02);
        ypts[0] = ayi = ayid + ayip;
        ypts[1] = byi = ayid + (int) by;
        ypts[2] = cyi = ayid + (int)(cy = p1y - co02);
        ypts[3] = dyi = ayid + (int)(dy = p1y + co02);

        // keep a record of where we cross the edge of the screen
        crosses[i] = 0;
        if ((axi<=LC)||(bxi<=LC)||(cxi<=LC)||(dxi<=LC)) { crosses[i] |= 1; }
        if ((axi>=RC)||(bxi>=RC)||(cxi>=RC)||(dxi>=RC)) { crosses[i] |= 2; }
        if ((ayi<=TC)||(byi<=TC)||(cyi<=TC)||(dyi<=TC)) { crosses[i] |= 4; }
        if ((ayi>=BC)||(byi>=BC)||(cyi>=BC)||(dyi>=BC)) { crosses[i] |= 8; }

        // swap data for next time
        ax = dx;
        ay = dy;
        bx = cx;
        by = cy;
      }

      // handle the last point
      p2 = path[nPathPoints];
      apoly = polygons[nPolys++];
      xpts = apoly.xpoints;
      ypts = apoly.ypoints;

      xpts[0] = (int)ax;
      xpts[1] = (int)bx;
      xpts[2] = (int)(p2.x);
      xpts[3] = (int)(p2.x);

      ypts[0] = (int)ay;
      ypts[1] = (int)by;
      ypts[2] = (int)(p2.y);
      ypts[3] = (int)(p2.y);
    }
  }

  void smooth() {
    // average neighboring points

    final float weight = 18;
    final float scale  = 1.0 / (weight + 2);
    int nPointsMinusTwo = nPoints - 2;
    Vec3f lower, upper, center;

    for (int i = 1; i < nPointsMinusTwo; i++) {
      lower = path[i-1];
      center = path[i];
      upper = path[i+1];

      center.x = (lower.x + weight*center.x + upper.x)*scale;
      center.y = (lower.y + weight*center.y + upper.y)*scale;
    }
  }
}

class Vec3f {
  float x;
  float y;
  float p;  // pressure

  Vec3f() {
    set(0, 0, 0);
  }

  Vec3f(float ix, float iy, float ip) {
    set(ix, iy, ip);
  }

  void set(float ix, float iy, float ip) {
    x = ix;
    y = iy;
    p = ip;
  }
}
Recitation modeling

Date: Nov. 17th

Instructor: Rudi

Task: Use Adobe Illustrator and TinkerCAD to create a 2D print and a 3D model.


Following the instructions, I created a 2D design in Illustrator that can be sent to the laser cutter to make a stamp.

During the recitation, I used TinkerCAD to model a wearable bracelet that can hold an accelerometer, which controls a stepper motor in a flower pot.


Recitation Workshop

Date: Nov 26th

Instructor: Rudi

Task: Attend one of the three workshops

Experience: I attended Luis and Jack’s workshop on object-oriented programming, in which we learned how to use classes and ArrayLists. As an in-class assignment, I created an animation in which punches appear when you press a key.

ArrayList<Fist> fists = new ArrayList<Fist>();
int quantity = 10;

void setup() {
  size(1000, 1000);
  for (int i = 0; i < quantity; i++) {
    int positionX = (int)random(width);
    int positionY = (int)random(height);
    int speed = (int)random(2, 5);
    fists.add(new Fist(positionX, positionY, speed));
  }
}

void draw() {
  if (keyPressed == true) {
    for (int i = 0; i < fists.size(); i++) {
      Fist fist = fists.get(i);
      fist.change();  // jump to a new random position while a key is held
    }
  }

  for (Fist fist : fists) {
    //  fist.bounce();
    fist.display();
  }
}

class Fist {
  // variables that are part of my object
  int x = 0;
  int y = 0;
  int speedX = 5;
  int speedY = 7;
  color c;

  Fist(int inX, int inY, int inS) {
    x = inX;
    y = inY;
    speedX = inS;
    speedY = inS;
  }

  void change() {
    // jump to a random location
    x = int(random(0, 1000));
    y = int(random(0, 1000));
  }

  void display() {
    // draw the fist at (x, y); the drawing code was not included in the original post
  }
}

Essay for the final

Thoughts from the interaction lab

Interaction is a complicated yet intriguing concept. My idea of interaction is quite simple: interaction is how you use your body to make changes in the real world. The change can be either practical or artistic. By using the body, I mean using any part of it, like the hands, the feet, or even the eyeballs, and then, through an interface, producing a change in the real world. It can be a door opening, a screen playing, or even a robot working. Interaction is how you communicate with non-human, non-living things. It is the conversation between life and non-life.

Inspired by the light matrix and the article about the infinite future of 3D interaction, we came up with the idea to build an interactive theatre that eliminates the difficulties we have observed when putting on a show. The light matrix shows us the beauty that can be created by technology, and the reading demonstrates the limitless possibilities of interaction: through interaction, a human can create not only utility projects but also pure art, and the interaction can happen beyond a two-dimensional surface. The project aims at reducing the difficulty and complexity of producing a show and at creating a more immersive surrounding for the audience. In the brainstorming process, we came up with the idea of using an infrared emitter and receiver to let the light matrix follow the user, while various sensors push the storyline forward, finally creating an immersive background both for the user who is participating in the drama and for the audience who are watching it. The project is created not only for professional drama production teams but also for amateur individuals: it gives individuals the opportunity to produce their own dramas and theatre experiences.

The reaction we expect from this project is, first, intense wonder. We hope that with a large screen we can make the audience wow at what they see and hear, followed by another wow at how astonishing the underlying interactions are. Next, we expect the users to follow the instructions and really feel the story we have set up. Even though it might be only a short clip compared with what we are picturing right now, we sincerely want the users and the audience to see the potential in this project and feel the possibilities it has to offer.

However, a few major problems remain to be solved. The first is where we should place our sensors and what kind of stories we should put into the piece; we want it to be an open-ended story so that users can act however they like. The second is where we can find a screen big enough to produce an immersive atmosphere. Other problems, such as adjusting the sensors, are still pending.

Lab 8- Drawing Machines Frederick & Forrest

Date: Nov.10th

Instructor: Rudi

Partner: Forrest (from Sean’s class)

Task:

1. Build the following circuit to control the stepper.

2. Use your potentiometer and the MotorKnob example to control your motor. Please note that the 42STH33-0404AC stepper motor is a 200-step motor. You can use the function map() to match the movement of the knob with the rotation of the motor.

3. Write a sketch in Processing that sends values to Arduino. Replace the potentiometer by using the values from Processing to control the motor.

4. Then, find another person to work with. Combine your parts into a mechanical arm that can hold a marker (see the picture below). Use Processing to control the movement of your motors, making adjustments with your partner. You have just created a drawing machine!
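Step 2 hinges on Arduino's map() function: a linear rescaling of the potentiometer's 0–1023 analog range onto the motor's 0–200 step range. The formula behind map(), sketched here in Java with the same integer arithmetic Arduino uses (the class name is mine):

```java
public class MapDemo {
    // Arduino-style map(): rescale x from [inMin, inMax] to [outMin, outMax].
    // Integer division truncates, just as in the Arduino implementation.
    public static long map(long x, long inMin, long inMax, long outMin, long outMax) {
        return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
    }
}
```

With this, a full turn of the knob (analog reading 1023) becomes 200 steps, i.e. one full revolution of a 200-step motor.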


First, Forrest and I assembled the circuit, and by using the example code, we made the motors work.

After this, we modified the MotorKnob example so that the potentiometer controlled the motor. (video missing)

Then I used the serial communication code to send data from Processing to Arduino. I chose to send the mouseY value to control the motor, and it worked. So we combined both stepper motors and created a Pollock-style painting.

// IMA NYU Shanghai
// Interaction Lab

import processing.serial.*;

Serial myPort;
int valueFromArduino;

void setup() {
  size(500, 500);

  printArray(Serial.list());
  // this prints out the list of all available serial ports on your computer.
  myPort = new Serial(this, Serial.list()[ 0 ], 9600);
  // You will definitely get an error here.
  // Change the PORT_INDEX to 0 and try running it again.
  // And then, check the list of the ports,
  // find the port "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----"
  // and replace PORT_INDEX above with the index number of the port.
}

void draw() {
  // send a value to the Arduino: mouseY controls the motor
  myPort.write(mouseY);
}
/*
 * MotorKnob
 *
 * A stepper motor follows the turns of a potentiometer
 * (or other sensor) on analog input 0.
 *
 * This example code is in the public domain.
 */
#include <Stepper.h>

// change this to the number of steps on your motor
#define STEPS 200

// create an instance of the stepper class, specifying
// the number of steps of the motor and the pins it's
// attached to
Stepper stepper(STEPS, 8, 9, 10, 11);

// the previous reading from the analog input
int previous = 0;

int valueFromProcessing;

void setup() {
  Serial.begin(9600);
  // set the speed of the motor to 30 RPMs
  stepper.setSpeed(30);
}

void loop() {
  // get the latest value sent from Processing
  while (Serial.available()) {
    valueFromProcessing = Serial.read();
  }
  int val = map(valueFromProcessing, 0, 500, 0, 200);

  // move a number of steps equal to the change in the
  // sensor reading
  stepper.step(val - previous);

  // remember the previous value of the sensor
  previous = val;
}