Final Project Documentation | Boyan Xu (Sean’s Section)

1. Idea’s Origin

When I first reflected on what to do for my final project, I happened to be listening to a lot of classical music. Maybe the game “Wii Music” resurfaced from my subconscious; either way, I came up with the idea of a project that gives you the chance to become a conductor, meaning you can control the playing speed of an orchestra.

My original idea was to use an accelerometer to sense the waving of your hand, and three ultrasonic sensors to make it possible to control the three parts of a single song.

2. Background Research

My initial idea for the project was to create an interactive system that gives people the chance to become a conductor. At the time I didn’t realize how huge the task was, not only to achieve the basic requirement but also to make it a complete and coherent project. Writing this documentation now, recalling how many difficulties I solved, I suddenly realize how far I have come.

At the beginning of the project, I did a wide investigation of the relevant hardware (Arduino) and software (Processing) information I might use, as well as how conductors actually conduct. Here are some of the URLs of my primary reading materials:

 

3. How to read the waving speed

First I tried the Genuino 101 (a different board) with its built-in accelerometer, using violent changes in acceleration as the counting signal. But I found that the wire would largely destroy the experience of conducting. I then tried Bluetooth to solve this problem, but the power source, namely the battery, caused the same problem.

Finally, I decided to use a Leap Motion to capture the waving signal.

How to get the data I needed was another interesting challenge. I had to design an algorithm that would ultimately give me the speed of your waving, or an equivalent. At first I pondered using the derivative of your finger’s position to calculate the speed, but then I realized I didn’t need such an accurate velocity, so I converted the requirement for velocity into the requirement of counting how many times your finger crosses three blocks every 3 seconds.

After further thought, I found that this can be framed as a calculus problem! What I want is the instantaneous speed of the index finger, which is captured by the Leap Motion and converted into two position values: (x, y).

If I keep adding blocks, rather than using only three, until the number of blocks approaches infinity, I can get an arbitrarily precise velocity value! But since my project’s accuracy requirement isn’t strict, a mere 3 blocks achieve the basic function. The core counting part is below.
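In the limit, this is exactly the definition of instantaneous speed: counting crossings of blocks of width Δx during a window of Δt gives the average speed v = Δx/Δt, and as the blocks shrink and the window shortens this approaches v = dx/dt. Three blocks and a 3-second window are simply a very coarse version of that limit.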

 

// ====================================================

// Block 1: top region. Count one crossing the first time the finger enters.
if (fingerIndexX > 400 && fingerIndexX < 1200 && fingerIndexY > 0 && fingerIndexY < 500) {
  x1 = 255;                     // highlight this block
  if (alreadyExecuted1 == false) {
    x = x + 1;                  // count the crossing exactly once
    alreadyExecuted1 = true;    // don't count again until the finger leaves
  }
  alreadyExecuted2 = false;     // re-arm the other blocks
  alreadyExecuted3 = false;
} else {
  x1 = 155;                     // dim the block when the finger is elsewhere
}

// Block 2: lower-left region.
if (fingerIndexX > 0 && fingerIndexX < 800 && fingerIndexY > 500 && fingerIndexY < 1000) {
  x2 = 255;
  if (alreadyExecuted2 == false) {
    x = x + 1;
    alreadyExecuted2 = true;
  }
  alreadyExecuted1 = false;
  alreadyExecuted3 = false;
  alreadyExecuted4 = false;
  alreadyExecuted5 = false;
} else {
  x2 = 155;
}

// Block 3: lower-right region.
if (fingerIndexX > 800 && fingerIndexX < 1600 && fingerIndexY > 500 && fingerIndexY < 1000) {
  x3 = 255;
  if (alreadyExecuted3 == false) {
    x = x + 1;
    alreadyExecuted3 = true;
  }
  alreadyExecuted1 = false;
  alreadyExecuted2 = false;
  alreadyExecuted4 = false;
  alreadyExecuted5 = false;
} else {
  x3 = 155;
}
// ====================================================

 

4. Feedback on conducting

Another problem I met was the lack of feedback on your conducting.

First I tried a simple ellipse that traced your index finger’s position, but that design lacked aesthetic value. So I took inspiration from a sample sketch included with Processing.js: I created a class called Particle and let hundreds of particles give you the feedback.

That proved pretty cool!  🙂

 

5. Multimedia’s effect on creating atmosphere

As you can see in the picture above, there was no background at first. The conducting interface was so bare that it couldn’t effectively hint that you were conducting an orchestra.

So I used this picture as the interface’s background.

Again, that proved pretty cool!  🙂

6. Make the experience a “complete process”

Before, my project only contained the conducting interface. That wasn’t complete, as there was no instruction screen as an outset, nor an assessment of your conducting as an endpoint.

(instructions pic)

(assessment pic)

To stop the conducting component while the instruction screen is on your screen, I used an if statement and a boolean flag:

// ==================================================

if (alreadyChosen == false) {
  soundfiles[0].rate(0);   // pause all three parts while instructions are showing
  soundfiles[1].rate(0);
  soundfiles[2].rate(0);

  if (instructionDone == false) {

    ……

// ==================================================

To make the instruction screen last the necessary amount of time, I wrote a timer as below:

int sec = second();
if (sec % 3 == 0 && visites != sec) {  // true once every 3 seconds
  r = r + 1;                           // r counts elapsed 3-second intervals
  visites = sec;                       // remember which second was already counted
}

 

7. Combining Arduino

The Leap Motion alone could already realize my whole idea for the project. But thanks to Professor Sean’s reminder, I noticed that we were required to build an interactive system using Processing AND Arduino.

My first idea was to use three infrared distance sensors to control the volume of the song, but when I finished my sketch and started testing, I found that the readings from the IR distance sensors contained a looooooooooooooot of noise, which made the song’s volume a total mess.
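(For reference, a standard way to tame noisy analog readings is to smooth them on the Processing side, for example with an exponential moving average. A minimal sketch of the idea, not the code I ended up using:)

float smoothed = 0;  // running estimate of the sensor value
float alpha = 0.1;   // smaller alpha = stronger smoothing, slower response

void smoothSensor(int raw) {
  smoothed = smoothed + alpha * (raw - smoothed);  // drift toward each new reading
}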

So I gave up the Arduino sketch I had already finished and tried applying a joystick to my project instead.

 

8. Song Choosing System

The result of applying the joystick was that I came up with a new idea: a song choosing system!

That later proved to make the project more interactive.

9. Appearance Designing
In this part, I mainly focused on two things:
1) Hiding the Arduino, breadboard, and wires, so the experiencer isn’t reminded of the machinery behind the process.
2) Using two styrofoam block pedestals to hint to the experiencer that there are only two components.
10. Tester feedback
I invited two friends, who knew nothing about my project, to test it, in order to find out whether the necessary information was given in time.
I received some feedback and later made some small changes to my project.

 

import processing.serial.*;
import processing.sound.*;
import de.voidplus.leapmotion.*;
int totalParticles = 100;
float targetX = 0.0;
float targetY = 0.0;
float rms_scaled;
PImage img;
PImage img1;
PImage img2;
PImage back1;
PImage back2;
PImage ins1;
PImage res1;
PImage res2;
PImage res3;



PFont f1;
PFont f2;
PFont f3;
PFont f4;
PFont f5;

// ======================================================
String myString = null;
processing.serial.Serial myPort;

int NUM_OF_VALUE = 3;
int[] sensorValues;
// ======================================================




SoundFile[] soundfiles = new SoundFile[3];
int i = 0;


Amplitude rms;
float smooth_factor =0.35;
float sum;
float scale=5;




LeapMotion leap;
Particle[] particles = new Particle[totalParticles];
float fingerIndexX ;
float fingerIndexY ;
int x = 0;
int y = 0;
int z = 0;
int q = 0;
int r = 0;
int s1 =0;
int s2 =0;
int s3 =0;
int sm =0;

int Jx =0;
int Jy =0;

int posx =800;
int posy =600;

int Oldx = 0;
float rate = 4;
float realRate = 4;
int diff = 4;
int x1 = 155;
int x2 = 155;
int x3 = 155;
boolean alreadyExecuted1 = false;
boolean alreadyExecuted2 = false;
boolean alreadyExecuted3 = false;
boolean alreadyExecuted4 = false;
boolean alreadyExecuted5 = false;
long visited = 0;
long visites = 0;

boolean alreadyChosen = false;
boolean instructionDone= false;


int freq1;
int freq2;
int freq3;
int compare;


void setup() {
  size(1600, 1200);  
  background(0); 
  img = loadImage("pic1.png");
  img1 = loadImage("1.jpg");
  img2 = loadImage("2.png");
  back1 = loadImage("back1.png");
  back2 = loadImage("back2.png");
  ins1 = loadImage("ins1.png");
  res1 = loadImage("res1.png");
  res2 = loadImage("res2.png");
  res3 = loadImage("res3.png");

   // ==================================================
  f1 = createFont("Arial",32,true);
  f2 = createFont("Arial",32,true);
  f3 = createFont("Arial",32,true);
  f4 = createFont("Arial",32,true);
   
   // ==================================================
 for(int i = 0; i < totalParticles ; i++){
   Particle particle = new Particle();
   particles[i] = particle;
 }
    
   // ==================================================
    
  
  
  
  soundfiles[0] = new SoundFile(this, "6.mp3");
  soundfiles[1] = new SoundFile(this, "2.mp3");
  soundfiles[2] = new SoundFile(this, "4.mp3");
  
  
  soundfiles[0].loop();
  soundfiles[1].loop();
  soundfiles[2].loop();
  

  

  leap = new LeapMotion(this);
  
  // ================================================== 
  setupSerial();
}




void draw() {

 // ==================================================
  updateSerial();
  printArray(sensorValues);
 // ==================================================
  
  
    if(alreadyChosen == false){
       soundfiles[0].rate(0);
       soundfiles[1].rate(0);
       soundfiles[2].rate(0);
      
      if(instructionDone == false){
       
       int sec = second();
       if(sec % 3 == 0 && visites != sec){
        r = r+1;
        visites = sec;
        }
       
         
       if(r <= 6){
       image(back1,0,0);}
       
       else if(r <=14){
       image(ins1,0,0);}
       
       else if( r<= 16){
       image(back2,0,0);}
       
       else{instructionDone = true;}
       
        
      } else{
      
      background(0);
       soundfiles[0].rate(0);
       soundfiles[1].rate(0);
       soundfiles[2].rate(0);
       
       image(img1,900,0);
       pushMatrix();
       scale(2);
       image(img2,500,300);
       popMatrix();
       
       
       pushMatrix();
       fill(#F08E4D);
       translate(200,100);
       rect(0,0,600,250);
       rect(0,400,600,250);
       rect(0,800,600,250);
       popMatrix();
       

       textFont(f1,64);
       fill(0);
       text("1.La Java Va Bien",250,225);
     
       
       textFont(f2,64);
       fill(0);
       text("2.The Blue River",250,625);     
       
       textFont(f3,64);
       fill(0);
       text("3.Strauss Waltz",250,1025);  
      

       fill(#F08E4D);
       strokeWeight(5);
       ellipse(posx,posy,200,200);
       
       textFont(f4,25);
       fill(255);
       text("Press to Choose",posx-90,posy);  
      
      if(sensorValues[0]<441){
        Jx = 10;
      } else if(sensorValues[0]< 682){
        Jx = 0;
      } else if(sensorValues[0]<923){
        Jx = -10;
      }
        
      if(sensorValues[1]<441){
        Jy = -10;
      } else if(sensorValues[1]< 682){
        Jy = 0;
      } else if(sensorValues[1]<923){
        Jy = 10;        
      }
      

      posx = posx - Jx;
      posy = posy - Jy;
 

 if(posx >= 200 && posx <= 800 && posy >= 100 && posy <= 350 && sensorValues[2] == 0){
   i = 0;
   alreadyChosen = true;
   
 } 
 if(posx >= 200 && posx <= 800 && posy >= 500 && posy <= 750 && sensorValues[2] == 0){
   i = 1;
   alreadyChosen = true;
 } 
 if(posx >= 200 && posx <= 800 && posy >= 900 && posy <= 1125 && sensorValues[2] == 0){
   i = 2;
   alreadyChosen = true;
 }     
      } 
       
    }
    
    
    else{

  
noStroke();
smooth();
background(0);

image(img,0,0);

fill(#CE723C,x1);
rect(400,0,800,500);
fill(#AF5C2C,x2);
rect(0,500,800,500);
fill(#83401C,x3);
rect(800,500,800,500);
   // ==================================================
for (int i =0; i < totalParticles ; i++){
  particles[i].update();
  particles[i].render();
// Type different value into the parameter + make some change to the function render!
}

   // ==================================================

  // ...

  int fps = leap.getFrameRate();
  for (Hand hand : leap.getHands ()) {


    // ==================================================
    // 2. Hand

    int     handId             = hand.getId();
    PVector handPosition       = hand.getPosition();
    PVector handStabilized     = hand.getStabilizedPosition();
    PVector handDirection      = hand.getDirection();
    PVector handDynamics       = hand.getDynamics();
    float   handRoll           = hand.getRoll();
    float   handPitch          = hand.getPitch();
    float   handYaw            = hand.getYaw();
    boolean handIsLeft         = hand.isLeft();
    boolean handIsRight        = hand.isRight();
    float   handGrab           = hand.getGrabStrength();
    float   handPinch          = hand.getPinchStrength();
    float   handTime           = hand.getTimeVisible();
    PVector spherePosition     = hand.getSpherePosition();
    float   sphereRadius       = hand.getSphereRadius();

    // --------------------------------------------------
    // Drawing
   

    // ==================================================
    // 3. Arm

    if (hand.hasArm()) {
      Arm     arm              = hand.getArm();
      float   armWidth         = arm.getWidth();
      PVector armWristPos      = arm.getWristPosition();
      PVector armElbowPos      = arm.getElbowPosition();
    }


    // ==================================================
    // 4. Finger

    Finger  fingerThumb        = hand.getThumb();
    // or                        hand.getFinger("thumb");
    // or                        hand.getFinger(0);

    Finger  fingerIndex        = hand.getIndexFinger();
    // or                        hand.getFinger("index");
    // or                        hand.getFinger(1);

    Finger  fingerMiddle       = hand.getMiddleFinger();
    // or                        hand.getFinger("middle");
    // or                        hand.getFinger(2);

    Finger  fingerRing         = hand.getRingFinger();
    // or                        hand.getFinger("ring");
    // or                        hand.getFinger(3);

    Finger  fingerPink         = hand.getPinkyFinger();
    // or                        hand.getFinger("pinky");
    // or                        hand.getFinger(4);
    
    
    
    
    
     fingerIndexX = (fingerIndex.getPosition().x);
     fingerIndexY = (fingerIndex.getPosition().y);
     
     ellipse(fingerIndexX,fingerIndexY,100,100);
     

    for (Finger finger : hand.getFingers()) {
      // or              hand.getOutstretchedFingers();
      // or              hand.getOutstretchedFingersByAngle();

      int     fingerId         = finger.getId();
      PVector fingerPosition   = finger.getPosition();
      PVector fingerStabilized = finger.getStabilizedPosition();
      PVector fingerVelocity   = finger.getVelocity();
      PVector fingerDirection  = finger.getDirection();
      float   fingerTime       = finger.getTimeVisible();

      // ------------------------------------------------
      // Drawing

      // Drawing:
      // finger.draw();  // Executes drawBones() and drawJoints()
      // finger.drawBones();
      // finger.drawJoints();

      // ------------------------------------------------
      // Selection

      switch(finger.getType()) {
      case 0:
        // System.out.println("thumb");
        break;
      case 1:
        // System.out.println("index");
        break;
      case 2:
        // System.out.println("middle");
        break;
      case 3:
        // System.out.println("ring");
        break;
      case 4:
        // System.out.println("pinky");
        break;
      }


      // ================================================
      // 5. Bones
      // --------
      // https://developer.leapmotion.com/documentation/java/devguide/Leap_Overview.html#Layer_1

      Bone    boneDistal       = finger.getDistalBone();
      // or                      finger.get("distal");
      // or                      finger.getBone(0);

      Bone    boneIntermediate = finger.getIntermediateBone();
      // or                      finger.get("intermediate");
      // or                      finger.getBone(1);

      Bone    boneProximal     = finger.getProximalBone();
      // or                      finger.get("proximal");
      // or                      finger.getBone(2);

      Bone    boneMetacarpal   = finger.getMetacarpalBone();
      // or                      finger.get("metacarpal");
      // or                      finger.getBone(3);

      // ------------------------------------------------
      // Touch emulation

      int     touchZone        = finger.getTouchZone();
      float   touchDistance    = finger.getTouchDistance();

      switch(touchZone) {
      case -1: // None
        break;
      case 0: // Hovering
        // println("Hovering (#" + fingerId + "): " + touchDistance);
        break;
      case 1: // Touching
        // println("Touching (#" + fingerId + ")");
        break;
      }
    }


    // ==================================================
    // 6. Tools

    for (Tool tool : hand.getTools()) {
      int     toolId           = tool.getId();
      PVector toolPosition     = tool.getPosition();
      PVector toolStabilized   = tool.getStabilizedPosition();
      PVector toolVelocity     = tool.getVelocity();
      PVector toolDirection    = tool.getDirection();
      float   toolTime         = tool.getTimeVisible();

      // ------------------------------------------------
      // Drawing:
      // tool.draw();

      // ------------------------------------------------
      // Touch emulation

      int     touchZone        = tool.getTouchZone();
      float   touchDistance    = tool.getTouchDistance();

      switch(touchZone) {
      case -1: // None
        break;
      case 0: // Hovering
        // println("Hovering (#" + toolId + "): " + touchDistance);
        break;
      case 1: // Touching
        // println("Touching (#" + toolId + ")");
        break;
      }
    }
  }


  // ====================================================
  // 7. Devices

  for (Device device : leap.getDevices()) {
    float deviceHorizontalViewAngle = device.getHorizontalViewAngle();
    float deviceVerticalViewAngle = device.getVerticalViewAngle();
    float deviceRange = device.getRange();
  }


  
    // ====================================================

    // (See Section 3 above for an explanation of this block-crossing counter.)
    if (fingerIndexX > 400 && fingerIndexX < 1200 && fingerIndexY > 0 && fingerIndexY < 500) {
      x1 = 255;
      if (alreadyExecuted1 == false) {
        x = x + 1;
        alreadyExecuted1 = true;
      }
      alreadyExecuted2 = false;
      alreadyExecuted3 = false;
    } else {
      x1 = 155;
    }

    if (fingerIndexX > 0 && fingerIndexX < 800 && fingerIndexY > 500 && fingerIndexY < 1000) {
      x2 = 255;
      if (alreadyExecuted2 == false) {
        x = x + 1;
        alreadyExecuted2 = true;
      }
      alreadyExecuted1 = false;
      alreadyExecuted3 = false;
      alreadyExecuted4 = false;
      alreadyExecuted5 = false;
    } else {
      x2 = 155;
    }

    if (fingerIndexX > 800 && fingerIndexX < 1600 && fingerIndexY > 500 && fingerIndexY < 1000) {
      x3 = 255;
      if (alreadyExecuted3 == false) {
        x = x + 1;
        alreadyExecuted3 = true;
      }
      alreadyExecuted1 = false;
      alreadyExecuted2 = false;
      alreadyExecuted4 = false;
      alreadyExecuted5 = false;
    } else {
      x3 = 155;
    }
  
 
  
  
  // ====================================================
  
  
      
  // Every 3 seconds, compute how many blocks were crossed since the last
  // window; that count becomes the new target rate.
  int sec = second();
  if (sec % 3 == 0 && visited != sec) {
    diff = x - Oldx;   // crossings in the last 3-second window
    rate = diff;
    Oldx = x;
    visited = sec;
    q = q + 1;         // number of 3-second windows elapsed so far
    //delay(1001);
  }
    

  // ====================================================
 // Quantize the raw rate into three tempo levels and tally which level
 // the conductor hits most often (freq1/2/3 drive the final assessment).
 if (rate == 7.0 || rate == 8.0) {
   rate = 7.5;
   s1 = s1 + 1;
   freq2 = freq2 + 1;
 }
 if (rate == 9.0 || rate == 10.0 || rate == 11.0) {
   rate = 10;
   s2 = s2 + 1;
   freq3 = freq3 + 1;
 }
 if (rate == 0.0 || rate == 4.0 || rate == 5.0 || rate == 6.0) {
   rate = 5.0;
   s3 = s3 + 1;
   freq1 = freq1 + 1;
 }

 // Ease the actual playback rate toward the target to avoid abrupt tempo jumps.
 realRate = realRate + 0.5 * (rate - realRate);
 if (realRate == 7.0 || realRate == 8.0) {
   realRate = 8.5;
 }

  // Map the conducting rate onto the song's playback speed.
  soundfiles[i].rate(map(realRate, 4, 11, 0.6, 1.4));


print("x");
println(x);
print("Oldx");
println(Oldx);
print("realRate");
println(realRate);

// ====================================================
// ====================================================

 targetX += (fingerIndexX - targetX) * 1;
 targetY += (fingerIndexY - targetY) * 1;
 Particle nextParticle = particles[particleIndex];
 nextParticle.reset(targetX, targetY);
    if (++particleIndex >= particles.length) 
    {particleIndex = 0;
}

if(q>=20){
 if(freq1 >= freq2){
   
   if(freq1 >= freq3){
     compare = 1;}
   else if(freq1 < freq3){
     compare = 3;}
}
   
  if(freq1 <= freq2){
    
    if(freq2 >= freq3){
      compare =2;}
    else if(freq2 < freq3){
      compare =3;}
  }
}

  if(compare == 1){
  image(res1,0,0);
  soundfiles[i].rate(0);
  
  }
  if(compare == 2){
  image(res2,0,0);  
  soundfiles[i].rate(0);
}
  if(compare == 3){
  image(res3,0,0);  
  soundfiles[i].rate(0);  
}




}



}



int particleIndex = 0;






class Particle {
  float x = 0.0;
  float y = 0.0;
  float vx = 0.0;
  float vy = 0.0;
  float r = 255;
  float g = 255;
  float b = 255;
  float a = 255;
  color pColor = color(255, 255, 255, 255);
  int life = 0;

  void update() {
    if (this.life > 0) {
    
        this.life--;
        if (this.life < 50) {
            this.x += this.vx;
            this.y += this.vy;
            this.vx += random(4) - 2;
            this.vy += random(4) - 2;
            this.vx *=0.9;
            this.vy *=0.9;
            this.a = 255 * (this.life / 50.0);
        }
    }
  }
  
  void reset(float _x, float _y) {
      this.x = _x;
      this.y = _y;
      this.vx = random(4) - 2;
      this.vy = random(4) * 4 - 2;
      this.life = 55;
      this.g = map(_x, 0, width, 0, 255);
      this.b = map(_y, 0, height, 0, 255);
      this.a = 255;
  }

  void render(){
      fill(color(this.r, this.g, this.b, this.a));
      ellipse(this.x, this.y, 30, 30);
  }

}




 // ====================================================
 // ====================================================
void setupSerial() {
  printArray(processing.serial.Serial.list());
  myPort = new processing.serial.Serial(this, processing.serial.Serial.list()[0], 9600);

  myPort.clear();
  myString = myPort.readStringUntil( 10 );  // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[3];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == 3) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Final Project: Monkey Catch | Mackenzie (Moon’s Section)

Title
  • Final Project: Monkey Catch

Date: 13 December 2017

Instructor: Moon

Aim/Goal

  • Complete a game design in Processing, encompassing the styles of setting and object-oriented programming
  • Complete a circuit using pressure force sensors and an accelerometer with Arduino to control the bounce of the ball and the movement of the monkey
Materials needed / used / provided
  • Arduino, breadboard, cables, 220 ohm resistor, pressure force sensor, accelerometer

 

Exercises / Experiments

This project consists of three main steps. The first is to complete the circuit of the pressure force sensors and the accelerometer. The second is to draw two objects in Processing, representing the monkey and the ball, controlled by key codes. The third step is serial communication: connecting Arduino with Processing and replacing the keyboard controls with the sensors.

  1. Arduino

In the first step, I searched online and followed the diagrams below to complete the circuit of the pressure force sensor and the accelerometer.

PFS:

Accelerometer:

The circuit is very simple, so I completed it as shown.

2. Processing
The second step is to draw two objects in Processing, representing the monkey and the ball, controlled by key codes. Although this project combines Arduino and Processing, Processing is the key part of the work.
  • There are three lessons I learnt from coding the monkey and the setting. The first is how to insert a GIF image: I downloaded the GIF library from https://extrapixel.github.io/gif-animation/, then put the file inside the libraries folder under Processing. Under the guidance of Jowon, I inserted the GIF into Processing in the way below, which is very similar to inserting an image. The only difference is that you have to import the GIF library first.
The second lesson is using the keyboard to control the movement of the monkey. (UPDATE)
It turns out like this.
The third lesson is how to code the countdown. First, I studied the example online (https://forum.processing.org/one/topic/how-to-create-countdown-timer.html). Based on what I learnt from the Processing Forum, I created my own code to fit my project, as follows.
It works for counting down the number, but the problem is that it keeps counting past zero. I added t = 0 to the t <= 0 condition to hold the number at zero; however, that still did not work.
Under the guidance of Professor Moon, I switched from holding the number at zero to making the countdown disappear entirely, by drawing it only under the condition t > 0, the complement of t <= 0, as follows.
I made it!! Plus, since a player who keeps the ball off the ground for 30 seconds wins the game, I set up a winning screen as the consequence.
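A minimal sketch of the countdown logic described above (the names and numbers are illustrative, not my original code):

int duration = 30;  // game length in seconds
int startTime;      // millis() when the game began

void setup() {
  size(400, 200);
  textSize(48);
  startTime = millis();
}

void draw() {
  background(0);
  int t = duration - (millis() - startTime) / 1000;  // seconds remaining
  if (t > 0) {
    text(t, width/2, height/2);       // draw the countdown only while time remains
  } else {
    text("You win!", 100, height/2);  // at zero the countdown disappears and the win screen shows
  }
}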
  • Above is what I learnt from coding the monkey and the setting. The next step is to draw the ball using object-oriented programming. There are three things I learnt from this part.
First, I set up the general structure of the object-oriented code, which looks like this.
It works pretty well, but the speed is the same on the way up as on the way down, which is not very realistic. Thanks to Moon’s suggestion, I added the effect of gravity to make the ball’s bounce more polished, as follows.
It turns out like this (UPDATE video of ball bouncing)
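A minimal sketch of a gravity-based bounce of this kind (assumed names and values, not my original code):

Ball ball;

void setup() {
  size(600, 400);
  ball = new Ball(width/2, 50);
}

void draw() {
  background(0);
  ball.update();
  ball.display();
}

class Ball {
  float x, y;           // position
  float vy = 0;         // vertical velocity
  float gravity = 0.4;  // constant downward acceleration

  Ball(float x, float y) {
    this.x = x;
    this.y = y;
  }

  void update() {
    vy += gravity;          // gravity speeds up the fall...
    y += vy;
    if (y > height - 20) {  // ...until the ball hits the ground
      y = height - 20;
      vy *= -0.9;           // bounce back up, losing a little energy
    }
  }

  void display() {
    ellipse(x, y, 40, 40);
  }
}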
  • Finally, I came to the part of serial communication. In this project, I use the pressure force sensor to bounce the ball and the accelerometer to control the movement of the monkey.

Starting with the PFS, the circuit is shown in the picture below.

The first step is to set up a basic structure of serial communication.
(Sending data from Arduino)
(Receiving data in Processing)
What I learnt most from here is how to make the balls bounce out under control, one at a time according to the pressure. The problem I faced was that a ball would bounce out automatically without any pressure input. That can be solved by adding a condition: only when the input data > 0 does the ball launch. However, since the code runs repeatedly and fast, this still generated many balls at the same time. One solution is to increase the delay time in Arduino, like this.
However, the sensor becomes far less sensitive after increasing the delay, which causes a very bad user experience when playing the game. Thanks to Moon for his suggestion of setting a boolean variable that is false before the first press. At the initial moment of pressing, the previous value equals zero and the current value is larger than the previous value; the boolean then changes from false to true and a single ball bounces out.
(In variables)
(In class)
( In draw)
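A minimal sketch of that rising-edge check (the names are illustrative, not my original code):

int prevValue = 0;  // pressure reading from the previous frame

void checkPress(int currentValue) {
  if (prevValue == 0 && currentValue > 0) {
    // Rising edge: the press just started, so spawn exactly one ball here,
    // e.g. balls.add(new Ball(...));  (hypothetical spawn call)
  }
  prevValue = currentValue;  // remember for the next frame
}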
Generally, what I have improved in my Processing coding is organization. Using subtitles as guidance, the code becomes much clearer when refining the work.

Week 14: Around the World in 80 Plates (Cici&Alvin)

Link to our website: http://10.208.8.141/~tx315/eating/intro.html

Project name: Around the World in 80 Plates

Team member: Mostafa

For the final internet art project, we planned to make a fun game to help people release pressure. Images of different kinds of food fall from the top at a certain speed, and the user uses the “left” and “right” keys on the keyboard to move the character and eat the food. Users are asked to eat as much food as they can within a time limit. There are three categories of food, each worth different points. The user gets their final score when time runs out.

Following Cici’s advice, we created three scenarios/countries: China, Japan, and the USA. The user can choose which one to play in. For each country we use different types of food, a different character, a different background, and different music to fit that country’s cultural background, so as to let users better immerse themselves in the game and experience the culture of a specific country.

For example, in the Chinese one, the character is from Peking Opera, the background image is a traditional Chinese painting, and the background music is played on a traditional Chinese instrument, the pipa.

     

Although our project mixes cultural identities, the overall style is still cartoon-like. So we created an introduction page with some cartoon elements and also used animation.

I think we executed our idea for this eating game well. Every single element of our project was specially designed after thorough consideration. I really like our outcome.

 

Final Project: The Breakup Box (Chen & Liu)

Title: The Breakup Box
Partners: Julia Lahmi & Nastasia Kutuyev
Link: [on Google Drive]
Summary: For our digital art project, we decided to create a page that simulates a box of things one might have after the end of a relationship, things that either belonged to their ex or remind them of their ex. By interacting with the various elements, users can find out slightly more about what happened with this particular couple, as well as form their own emotions and opinions about the sentimental objects, before they ultimately pack up the box and face the decision of what note to leave things on with their ex.


Week 14: Internet Art Project (Ann Chen) – Yijie Zou

Project: Daphne’s Laurel

Source Code: https://github.com/DaKoala/Daphne-s-Laurel

Link: https://dakoala.github.io/Daphne-s-Laurel/

Partner: Wenhan (Leah) Dou

Description: This art project tells the Greek myth of Apollo and Daphne. Users can move blocks in the pot and watch the balls bounce. A flower blooms as the balls bounce, and when a flower blooms completely, users can click it to listen to part of Daphne’s story. There are four flowers in total, which together constitute the whole story of Daphne and Apollo. The bouncing balls symbolize the energy in the earth that nourishes the laurel. The laurel tree symbolizes Daphne, because in the Greek myth Daphne in the end becomes a laurel.


CrazyBot: Final Project Documentation

My final project is a chatbot that operates on the emotional content of the user’s input and outputs a stream of thought that hopefully matches the input’s sentiment.

The original idea was a chatbot that could summarize the opinions of Twitter users about a certain topic. However, it wasn’t meant to be: even streaming 60 megabytes worth of tweets (around 15,000 messages) did not provide a sufficiently topic-segmented corpus, and while some rare common topics such as ‘love’ are relatively popular, with a decent number of tweets, the others have fewer than 10 sentences at best. Another idea was to collect enough “original tweet – response” pairs and then train a recurrent neural network on this labeled data in order to create a ‘general purpose’ chatbot; however, it took too much time to compile a decent number of pairs, which didn’t leave enough time for training the NN.

Therefore a different approach was implemented, taking advantage of the limited number of emotions that the NRC emotion lexicon contains. The tweets were mapped to their emotions, and because we lost the ability to effectively drive the topic of the conversation by carefully picking the corpus, the bot’s output began to look more like a blabber compiled of different things users on Twitter say. There is a solution, of course: inflate the corpus merely by copying the topic dictionary several times. But that would only include a lot of very similar, unnecessary word structures, so the corpus that is sent to the generative model is instead compiled of all the topic-mapped sentences plus a chunk of the shuffled emotion-appropriate tweets. The size of the corpus is limited to 800, because even though a large corpus increases the probability of receiving something meaningful in a sentence, it also increases the computation time dramatically.

HINT: giving more concrete topics to the generator improves the results.

***

The topics are extracted using the StanfordDependencyParser; one needs to change the path to the package manually on line 41, for example: path = ‘C:\\Program Files (x86)\\Stanford Parser\\stanford-corenlp-full-2017-06-09\\’

***

After the corpus is compiled, it is passed through the Markov-chain-inspired class TextTools, which tokenizes the tweets by sentence, splits the sentences on stopwords as well as punctuation (but leaves the punctuation in the corpus), then POS-tags those chunks (phrases) which are not strings of stopwords, and applies a Markov chain to the resulting corpus, which consists of sequences of POS-tagged phrases and stopwords. It is important to note that the length of a POS-tagged sequence is manually limited to at most 4 words; you could therefore say an n-gram of 4 is used here, though that would not be precise, because uninterrupted POS-sequences of 4 or more words are rare and usually indicate a misspelling that is not tracked by the tools I am using. The n-gram size may be controlled on line 729.

Afterwards, the process is pretty much the same as in my midterm project. In short, the ‘mask’ of a sentence is constructed using the common occurrences in the corpus and then encoded into natural language. Out of multiple generated sentences, only a limited number are chosen via randomized sentiment analysis: with a probability of 60%, the most negative sentence is added to the final chain. The n-gram and mask-size variables can be controlled on line 715.

An unexpected difficulty was representing emoji characters in the Tkinter GUI: Tkinter cannot display these characters, so anything beyond the range it supports needs to be converted somehow. This is done by extracting the character’s Unicode name and plugging it into the string, converting it to ASCII.
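Roughly, the conversion works like the sketch below (written in Processing/Java for consistency with the other code in this collection; the bot itself does this in Python with unicodedata):

// Replace non-ASCII characters (emoji, symbols) with their bracketed
// Unicode names, e.g. "[HEAVY BLACK HEART]", so an ASCII-only widget can display them.
String nameify(String input) {
  StringBuilder out = new StringBuilder();
  int i = 0;
  while (i < input.length()) {
    int cp = input.codePointAt(i);
    if (cp > 127) {
      String name = Character.getName(cp);  // null for unassigned code points
      out.append("[" + (name != null ? name : "?") + "]");
    } else {
      out.appendCodePoint(cp);
    }
    i += Character.charCount(cp);
  }
  return out.toString();
}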

Overall, the bot produces relatively comprehensible and sometimes peculiar sentences, but as I mentioned earlier, the weak relation to the topic is explained by the small corpus, which is also what limits the use of a recurrent-neural-network-based model. For the same reasons, the output reads more like a stream of thought than like an actual thing a human would say. There is definitely room for improvement in the generative section; for example, a large enough neural network would be far more effective.

SAMPLE OUTPUT:

CrazyBo0_0t:
Hello, I am sliiiightly crazy…hah…Thats because I looked at too many tweets and now my brain is fried by you, humans!!! My favorite topics are….I dont remember, but I actually don’t give a damn what humans think anymore LOL LOL

 

Human: People are writing pleas to the state of Alabama on the day of its Senate race.

CrazyBo0_0t: Learned for die. You more hope you, have a pretty no democratic party activist are so honor Has is night with the hug him and know you more love until 274 used Boy was scoring my hate you have the kgb wanted by his day my given day with a kid”. Have a horrible manager at of you, hope someday to debut you his others.

 

Human: Thinking big takes the same amount of effort as thinking small

CrazyBo0_0t: Absolutely love waiting in my love is #bills game. You’re and love to office tells to love y’all in the life. How lovable and i i it’s not omg you run we, use is we’d love he better be past. Fucking love i socialist dhs over 17th birthday is in um excuse in 1. He was beautiful voice but counter attacking united will he’s me, and then hope harry so top websites at all. Awesome on the trust your martial. Embarrassingly negative.

 

Human: so what was that lousy thing, that trump said about jerusalem and palestine?

CrazyBo0_0t: In this january. City, your kim tensions. S you have a in a love you new purchases make what sometimes a egomaniacal narcissist to have #manchesterderby. I still like it is love to what always good of this love we can sad it days you and jordanian see it from in our ultimate compliment. Love and iss. To be once You come me too brainwash and didn’t heard so worried about. Happy birthday princess. From the had a meal you so wrong of the Guy it, but fergie but egomaniacal narcissist i will not love someone up the look you christmas now cause this [HEAVY BLACK HEART][?]. Then break been much your best players on a good people[HEAVY BLACK HEART][?]. Hope u like it actively targets since no other season you have a gotten lucky there”.

 

Human: songs are usually written about north korea and other puzzling things

CrazyBo0_0t: Make Will war no things. ! You know With baby to love of the because decent tools their smiles. Love &lt;333333 of love [YELLOW HEART]. [TWO HEARTS]. Snowy sunday afternoo…… He members how ure one in hurt participants and wasnt and special place who things their Dire an goal difference and you them drainining I love to love you members you things). To kenya, we him.

Final Project – Sharing an Umbrella: A Story on Generosity

Partners: 

Ines, Haitian, Sheryl, Carol

Materials: 

  • Bluetooth (HC05) (2)
  • Arduino (2)
  • Distance Sensor (1)
  • Umbrella
  • Earmuffs
  • 12V Battery
  • 9V Battery

Tools:  

  • Soldering Iron
  • Wire Cutters/Strippers

Equipment:

  • Wire
  • Solder
  • Duct Tape
  • Fabric
  • Thread and needle
  • Newspaper

Brainstorming:

Carol had the idea of incorporating an umbrella into our final project. After almost two days of brainstorming for a feasible idea we all enjoyed, we came up with this performance piece. None of us had ever made a project strictly to be performed, and it gave us a little more room to be creative and artistic, so we went for it!

The Concept: 

The performance idea we had in mind looks a little like this:

It’s a cold and rainy day. Person A stands in the foreground waiting for a taxi, uncovered as the rain hits their shivering body, pressing their earmuffs close to their head, the only part of their body they can force warmth onto. We can hear the sound of pouring rain. Person B comes up in the background – one of the fortunate few who bears an umbrella. It’s dark; their shape is only a vague silhouette. We see them approach Person A. It seems as though they will walk past the less fortunate one, but instead their pace slows. The umbrella is placed to cover both individuals, and in strange celebration lights on the umbrella ignite to brighten the gloomy night, while the sounds of birds chirping emerge.

Application to the Requirements: 

The requirements of this project are to create an interaction between two Arduino micro-controllers and for this to somehow be visualized. On top of the earmuffs we chose to place a distance sensor (we initially had the idea of using a moisture sensor as well, but the time constraint did not allow this). Placed directly on top of the earmuffs, over one’s head, the distance sensor senses the umbrella as it is moved over the individual’s head. This triggers two things. The simplest is that the lights on the umbrella turn on. A little more complicated is our desire to include sound: we want this interaction to change the already-existing sound of raindrops falling into raindrops hitting an umbrella. A subtle but beautiful nod to the act of sharing an umbrella.

Separating the Work: 

In order to play to everyone’s strengths, we formed teams: Haitian and I, and Sheryl and Carol. While Carol and Sheryl worked on the code, Haitian and I got to the design. We first sketched out ideas. Once we had agreed on the interaction with everyone, Haitian and I got to making the lights on the umbrella.

The Code: 

The code we needed for the project to work over Bluetooth consisted of three parts: a sketch that sends from Arduino to Processing, the Processing sketch itself, and a sketch that receives from Processing on the other Arduino. The first would be uploaded to the Arduino in the earmuffs and the last to the Arduino in the umbrella.
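A minimal sketch of the Processing middle layer (the port indices, baud rate, one-byte message format, and threshold are all illustrative assumptions, not our exact code):

import processing.serial.*;

Serial earmuffPort;   // Arduino with the distance sensor, via Bluetooth
Serial umbrellaPort;  // Arduino driving the umbrella LEDs, via Bluetooth

void setup() {
  printArray(Serial.list());  // find the two Bluetooth serial ports
  earmuffPort  = new Serial(this, Serial.list()[0], 9600);
  umbrellaPort = new Serial(this, Serial.list()[1], 9600);
}

void draw() {
  while (earmuffPort.available() > 0) {
    int distance = earmuffPort.read();  // one byte: the distance reading
    if (distance < 50) {                // umbrella detected overhead
      umbrellaPort.write('1');          // tell the umbrella Arduino to light the LEDs
    } else {
      umbrellaPort.write('0');
    }
  }
}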

The Umbrella: 

We wanted to make the lights on the umbrella wireless, so the option of LED strips was ruled out. Using Christmas lights was also ruled out because we weren’t sure how the wire would look once stripped, or whether we could even find the correct voltage. So we decided to stick to LEDs, choosing bright white ones. There are 8 ends to the umbrella, and we put an LED on each tip. We had to use a 12V battery, a transistor, and the Arduino. Because we didn’t want to connect each LED to its own pin and have too many wires running through the umbrella, we wanted to connect the LEDs to each other and then to the Arduino. We quickly found out that this would not work because the voltage wasn’t high enough. We decided to use a 12V battery, but then also needed a transistor. Everything worked on the breadboard (image below), so we got to transferring it to the umbrella.

Once on the umbrella, we realized that because the LEDs were in series, progressively less light shone through each one. So we had to re-evaluate. With Nick’s help, I used an LED wizard to input the source voltage, diode forward voltage, current (mA), and number of LEDs, to figure out what the problem was and what the array should be. It turns out that somewhere along the way we had miscalculated the diode forward voltage: for example, if each white LED drops roughly 3.2 V, four in series need about 12.8 V, more than the 12 V battery supplies. Instead of 2 series of 4 LEDs, we should have had either 4 series of 2 LEDs with resistors, or 2 series of 3 and 1 series of 2, with their respective resistors (screenshots below). Instead of risking the problems that could occur with this, we chose to simply give each LED its own output, since the umbrella is strong enough to hold the wires. We spent some time fixing this, and the circuit worked. We used 8 individual LEDs, each connected to its own resistor, then to each other, and then to ground; the positive leads each went to their own pin on the Arduino. To check the circuit, Sheryl prototyped a code and everything worked.

 (LEDs in a series not lighting up properly) 

(black is ground, blue is input)

All that was left on the umbrella was to connect the Bluetooth module to the Arduino. For this we only had to connect it to ground, power, and two pins (RX and TX) on the Arduino. We soldered wire from the Bluetooth module’s leads to the Arduino, and used a protective case around the leads to make sure we didn’t damage the device by soldering directly to it.

Cleaning up the Umbrella 

I hot glued every end of the LED to the ends of the umbrella to secure them. I also wrapped the wires around the rods in the umbrella and secured in various spots with duct tape. This made sure none of the wire was visible from the outside because they were disguised inside the umbrella rods. It also secured everything so that there were no free flowing wires. I taped down the Arduino  with duct tape. Because the battery is extremely heavy I taped it to the handle to make sure it didn’t break/weigh down the umbrella.

Moving on to the Earmuffs 

The breadboard for the earmuffs was pretty simple. We had to transfer both bluetooth and a distance sensor to earmuffs. Here is the initial sketch we had planned:

 

(we ended up getting rid of the moisture sensor and using Arduino not Beatle)

First, we wanted to make sure everything worked with the regular Arduino, and only later think about transferring everything to a smaller one. The earmuffs we purchased were pocketed, with a knit fabric on the ear, so if needed we could hide the Arduino in with the stuffing. The distance sensor would be placed on top of the headband to eventually detect the umbrella. We planned to place the Bluetooth module next to the Arduino to minimize wiring. It works exactly the same as the one on the umbrella: two pins (RX and TX) connect to the Arduino, along with power and ground.

Progress stalled by problems with coding/distance sensor 

Before we could transfer anything onto the earmuffs, we had to make sure the Arduino on the umbrella and the one we were going to put on the earmuffs were communicating with each other. Unfortunately, the distance sensor wasn’t sensing anything; the value it read was stuck at 255. It took a while of altering the code to see what was wrong. Carol ended up writing a new code that worked, but the distance sensor was still extremely unreliable. A fellow eventually told us this was happening because the distance sensor reads a value even when nothing is in front of it, and we needed to account for this in the code.

Fixed! Now to the Earmuffs…

Haitian and I worked on putting the circuit onto the earmuffs. We put the Arduino in one of the ears along with the bluetooth. We then placed the distance sensor on the top of the headband and used wire along the headband to go down to the ear where the Arduino was located. The project looked like this at this stage:

To make it wirelessly powered we used a 9V battery which we just soldered to a power and ground we had going from the Arduino. The wires went underneath and across the headband not to interfere with the wiring that was on top of the headband. This battery is just placed in the other ear pocket!

Note: The items that are placed in the earmuff pockets do not touch the user because they are on the outside not near the ear!

To make everything cleaner, we wanted to place fabric over the wiring on the headband. Everything on the earmuff itself looked pretty neat, so we left that alone. With more time, we would have moved to a smaller Arduino such as the Beatle or Lilypad. We encountered some difficulty soldering to the distance sensor and the bluetooth because of the tiny leads, but after several tries we were eventually successful. Haitian wanted to secure the distance sensor onto the earmuffs before covering it. To make a protective wing, she placed crumpled-up newspaper underneath where the leads were and then covered it with two fabrics that were sewn together.

Testing out the interaction! 

We tried out the interaction several times before getting to the performance. We encountered some problems, needing to change trig and echo to each have their own pin instead of sharing one. We also needed to change the 9V battery, which could have been the root cause of the distance sensor not working. The soldering on the Bluetooth kept disconnecting, so Haitian and I had to re-solder it several times. Eventually we got through these minor problems and adjusted the code to finally make everything work. Sheryl also adjusted the code to make the lights twinkle.

The Whole Shebang: 

We performed the piece in one of the IMA rooms with the projector on to set the scene of a rainy day. All lights were off and the blinds closed to emphasize the lights on the umbrella and the spotlight on person B. I edited the video on iMovie into black and white and doubled up the sound. Here is the whole thing put together 🙂 (play in 1080p)!!

Generative Language: Final Project

The idea behind my final project was to generate stereotypical news articles in the styles of liberal and conservative news sources. I wanted to look more carefully into how articles from different sources vary from each other when they address the same topic, and then try to emphasize and recreate these differences using some of the techniques we’ve learnt in class this semester. One of my main reasons for choosing this project is the great amount of speculation about the biases of major news outlets, as well as the existence of ideological “bubbles” within which most people tend to find themselves when it comes to news.

I began the project by collecting my source material. Since I wanted to focus on a widely covered, relatively recent event, I picked this year’s Las Vegas Shooting as my topic, and CNN and Fox as my two news sources (representative of liberal and conservative outlets). In order to minimize bias, I selected the first 10 news articles posted by each site on this topic (chronologically since the occurrence of the event). This should minimize any form of bias on my part in choosing the articles, and, as the results show, may have even been “too objective” of a method. I saved the HTML files of all of these articles, and then started inspecting the source code to figure out how to best extract the information. Once I had that part figured out, I used BeautifulSoup to extract and aggregate the content coming from the 20 articles I chose.

The next step in my project was to create a Markov Chain generator. I created a rather basic one, running at word-level, with a length of 3, but with special attention to punctuation. The results proved to be acceptable, though not amazing. Punctuation, in particular, required the most tweaking, as quotation marks, for example, were difficult to match and often looked out of place. Overall, however, the articles did seem to resemble real news.
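For illustration, a minimal word-level, order-3 Markov generator looks something like the sketch below (written in Processing/Java to match the other code in this collection; my actual generator is in Python and gives punctuation special treatment):

import java.util.*;

HashMap<String, ArrayList<String>> chain = new HashMap<String, ArrayList<String>>();

// Record, for every 3-word state, the words observed to follow it.
void buildChain(String[] words) {
  for (int i = 0; i + 3 < words.length; i++) {
    String key = words[i] + " " + words[i+1] + " " + words[i+2];
    if (!chain.containsKey(key)) {
      chain.put(key, new ArrayList<String>());
    }
    chain.get(key).add(words[i+3]);
  }
}

// Walk the chain from a seed state (any 3-word sequence present in the corpus).
String generate(String seed, int maxWords) {
  StringBuilder out = new StringBuilder(seed);
  String state = seed;
  for (int n = 0; n < maxWords; n++) {
    ArrayList<String> options = chain.get(state);
    if (options == null) break;  // dead end: stop early
    String next = options.get((int) random(options.size()));
    out.append(" " + next);
    String[] w = state.split(" ");
    state = w[1] + " " + w[2] + " " + next;  // slide the 3-word window
  }
  return out.toString();
}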

One thing I knew from the beginning was that the effectiveness of this project relies a lot on presentation. Simply showing/printing the plain text versions of these articles would probably fail to drive any point across, so I wanted to present the results in the form of news articles. My initial plan was to generate a static website based on a template I would create from the news websites my source material came from (CNN and FOX). However, the huge number of resources used, as well as the sheer amount of unrelated information on each page made me believe that such a template would be too distracting, and the result would not be very powerful. I then switched my idea towards replicating a printed news article instead. In order to do so, I needed to find/make a template still, but this time a considerably simpler one.

After doing some research, I decided to use LaTeX, since it perfectly fit the idea of templating that I needed. Although I am rather inexperienced with it, I did manage to find a New York Times template on the internet, and after a few hours of fixing errors and customizing it, I managed to get it working. I then set up the barebones template that I needed, while keeping in mind some special formatting I would apply later, and integrated the LaTeX commands into my Python script. Now, after I generate a new article using Markov chains, I open the corresponding (liberal/conservative) TeX template, fill in the contents of the article, and then export the resulting document as a PDF.

In order to both emphasize some of the distinctive characters of each article, and also make the results look more realistic, I also wrote functions that determine the most important sentences and the most prevalent unique words in each article. I then used the top sentences as the headlines of my articles, and highlighted the recurrent words in order to better show the differences in vocabularies between the two types of articles.
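For the prevalent-words half, the underlying idea is plain frequency counting over the article text, along the lines of this sketch (assumed details, not my exact code):

import java.util.*;

// Return the k most frequent words longer than 3 letters (a crude stopword filter).
String[] topWords(String text, int k) {
  final HashMap<String, Integer> counts = new HashMap<String, Integer>();
  for (String w : text.toLowerCase().split("\\W+")) {
    if (w.length() > 3) {
      Integer c = counts.get(w);
      counts.put(w, c == null ? 1 : c + 1);
    }
  }
  ArrayList<String> words = new ArrayList<String>(counts.keySet());
  Collections.sort(words, new Comparator<String>() {
    public int compare(String a, String b) {
      return counts.get(b) - counts.get(a);  // most frequent first
    }
  });
  List<String> top = words.subList(0, Math.min(k, words.size()));
  return top.toArray(new String[0]);
}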

To further display the generic aspect of the resulting articles, as well as the fact that the same narratives tend to be employed in similar cases by the same news companies, I decided to generate a large number of articles, each using the name of a different city instead of Las Vegas, so that the results are now purely fictitious, yet (hopefully) representative of how liberal and conservative news outlets each portray such events.

As I hinted earlier in this blog post, the one main aspect of this project which I may consider a “failure” is the fact that the differences between the two articles are not immediately obvious. Neither the headlines, nor the most used words reflect anything regarding the political orientation of the source texts, although on closer reading one may be able to identify the different approaches in telling the same stories. This made me realize two things: 1) the news immediately following such a tragedy tends to be focused on the actual event, on the facts, meaning that there is little political discourse involved and a very small amount of bias — something we should credit these news outlets for. I suspect the situation would be different if I had chosen my articles in a different manner. 2) The differences between liberal and conservative articles tend not to lie in the topics approached, or even in the vocabulary used, but rather in the way the dialogue is structured and the arguments formed, which is much more difficult for a machine to understand and recreate. Each news outlet appears to be very aware of the criticisms it faces, as well as of the narrative the other sources will try to create, and thus they tend to have proactive approaches. For example, Fox News articles brought up gun regulations and mentioned what liberals’ thoughts are, although the purpose was not to actually represent their beliefs. When we apply Markov Chains or look at the articles as “bags of words”, however, these nuances are immediately lost.

Nevertheless, this was a very informative, interesting project, and I do hope that the results (whether they match my initial expectations or not) can and do prove a point. The most exciting part will be to hear what people believe this point to be.

Here is a sample article (without any post-editing or formatting): Liberal-Times-Nr.-6