Final Project – Interaction Lab, Rudi’s session

  • Date: 12.17.2017 
  • Instructor: Rudi
  • Partner: Fred Qian
  • Documented by: Yian Zhang (Ian)
  • Course: Interaction Lab
  • Thanks for help: Luis, Nick, Leon, Jack, Nimrah

Project name: Under the Sea: An Adventure of Superman

Brief Introduction

For our last project in Interaction Lab, I worked with Fred to make an interactive and immersive experience. In the project, the user experiences a journey as Superman from a first-person perspective. The user’s different actions result in different reactions from the computer, and the computer, by changing the images shown on the screen, encourages the user’s further reactions. Actions and reactions: this is what interaction is basically about.

A projector was used and a screen curtain was made to provide a more authentic experience. A Kinect was used to detect the movement of the user. Other components, such as a potentiometer, a bubble machine, and a fan, were used to provide various interaction effects.

 

How we came up with the idea

In the very beginning, Fred and I were impressed by the light matrix Rudi showed us in class. By swinging up and down and changing the color and brightness of its lights, the light matrix could express a lot of information to the audience while offering wonderful visual enjoyment. We started thinking about how we could develop a way for the users to express just as much information back to the light matrix.

 

I remembered a reading called “A Brief Rant on the Future of Interaction Design” by Bret Victor. He argued that the future of interaction should not be limited to the surfaces of our phones: our hands can do a lot more than 2D movements. So we decided that hand gestures would be the way the users communicate information to the computer in this project. This meant we needed a sensor that could detect the user’s body language without physical contact, which is why we chose the Kinect.

 

We wanted to create a theater with the light matrix hung above the stage. When a hand moved below a light in the matrix, that light would rise and change its color. Other stage effects, like sound and background, could be controlled with other gestures.

However, our ideas conflicted here. Fred wanted the user to be an actor or actress, but I thought the user should be the director. After a very long discussion, we kept exchanging our ideas and finally reached an agreement: the user is neither the actor nor the director, but an adventurer who experiences a fantastic journey! The user is not the director, because the storyline is pre-set. The user is not the actor either, because he is not giving a performance to others; instead, he is himself an audience enjoying the beauty and fun of the journey. However, we found the light matrix was not really necessary once the story no longer happened in a theater, so we gave up the idea of the light matrix, although we were still grateful for its inspiration.

Plots

1. If the user reaches his arms forward as Superman does, he will “fly” like Superman.

2. Meanwhile, the fan will turn on to create a “sea breeze”.

3. If the player jumps and squats, he will be able to dive into the deep sea.

4. Once the user gets underwater, the bubble machine will start blowing bubbles, so it feels like the user is really diving.

5. The fish will follow the right hand of the user, as if they are playing with the user.

6. Then the user will lose control of the fish. They will swim around the doorknob, suggesting the user should open the door.

7. If the user turns the doorknob to open the door, he will enter the palace and find the little mermaid. The journey ends here.

Work diary

Although Fred was mainly responsible for the physical setup and I took charge of the coding, interactions, and plot design, we worked very closely with each other, so we both understood all the work very well.

  1. First, we designed the environment for the project. We drew several sketches and discussed the interactions involved. We put a curtain screen in front of the user, with a Kinect camera hung above it. Fred raised a brilliant point that we should put the projector behind the screen so that the user would not block the projection. We also had a general idea of our story (see the Plots section above).

 

 

2. Then we came to the coding part. We found two marvelous sketches on OpenProcessing.org and made some changes to them. One was a beautiful 3D ocean effect: we changed the perspective of the user and the colors of the sun and the sea, and we added the zooming effect. The other was a moving-fish sketch: we put it into the deep-sea background and made the fish track the movement of the user’s hand.

(Images: the original sketches side by side with our modified versions.)

Then we added a third scene, the palace of the mermaid, and combined the three scenes together, which was backbreaking work because the variables and loops of the three sketches interfered with one another. We also added some transition animations to glue them together; a rough illustration of the scene-switching pattern is sketched below.
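As a rough illustration of that pattern (an invented example, not our actual code), a single scene variable can select what to draw each frame, with each scene kept in its own function so their variables stay separate. The scene names and stub functions are made up for this sketch:

// Hypothetical minimal sketch of the scene-switching pattern.
int scene = 0;  // 0 = ocean, 1 = deep sea, 2 = palace

void setup() {
  size(400, 300);
}

void draw() {
  if (scene == 0) drawOcean();
  else if (scene == 1) drawDeepSea();
  else drawPalace();
}

void drawOcean() {
  background(95, 158, 160);
  text("ocean - click to dive", 20, 20);
}

void drawDeepSea() {
  background(25, 25, 112);
  text("deep sea - click to reach the palace", 20, 20);
}

void drawPalace() {
  background(218, 165, 32);
  text("palace", 20, 20);
}

void mousePressed() {
  scene = (scene + 1) % 3;  // the real project advances scenes with gestures instead
}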

 

3. After finishing the code, we came to the physical part.

-We literally made our own screen curtain! We sewed two pieces of white fabric together.

-We then used some poles together with the curtain to make the screen.

-We bought a bubble machine, took it apart, and connected it to the Arduino board.

-We 3D printed a doorknob and connected it to the potentiometer.

-We got a Kinect and a projector and learned how to use them.

 

-Finally, one night, we put all the parts together and set the project up on the 2nd floor of the academic building!

 

-After that, we added some user instructions to the project to make it learnable:

“Open the program, hold up your arms, and pose like Superman” (wind)

“Wait until you are close enough to the sun, then stop flying, jump, and dive” (bubbles)

“Use your hands to play and interact with the fish; let them track you”

“Follow the fish and twist the doorknob”

“Enjoy”

 

The End of the Semester Show 

Since projectors were banned at the show, we filmed a video of our project and made a simplified version that ran on a computer screen, and showed both to the audience.

 

Here is the video of the COMPLETE VERSION of our project.

 

Critical Techniques

  1. Kinect. It was the critical sensor in the project: it can precisely detect the position and movement of every joint of the user.
  2. Projector. It was the main way we communicated with the user, providing an authentic, immersive experience.
  3. P3D. This renderer helped us create convincing 3D animations with Processing.
  4. Tabs and classes in Processing. These techniques made our sketches much clearer, so debugging became much easier.
  5. Virtual canvas (PGraphics). This helped me combine a 3D canvas and a 2D canvas (see the sketch after this list). Thanks to Luis for the help!
  6. Soldering. We used it to rework many of the components, like the bubble machine and the fan. This technique helped us enrich the interactions.
  7. Sewing. We used it to make our screen! Thanks to Nimrah for teaching us!
  8. 3D printing. We made our doorknob with it! I hope I can use this fascinating technology more in the future. Thanks to Jiwon and Jack for the guidance!
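To illustrate point 5, here is a minimal sketch of the PGraphics idea (an invented example, not our project code): 2D content is drawn on an offscreen canvas and then layered over the 3D scene as an image.

PGraphics overlay;  // the 2D virtual canvas

void setup() {
  size(640, 480, P3D);
  overlay = createGraphics(width, height);
}

void draw() {
  background(0);

  // 3D scene on the main canvas
  pushMatrix();
  translate(width/2, height/2);
  rotateY(frameCount * 0.01);
  box(120);
  popMatrix();

  // 2D drawing on the virtual canvas
  overlay.beginDraw();
  overlay.clear();
  overlay.fill(255);
  overlay.noStroke();
  overlay.ellipse(mouseX, mouseY, 30, 30);
  overlay.endDraw();

  // layer the 2D canvas over the 3D scene
  hint(DISABLE_DEPTH_TEST);  // keep the overlay on top of the 3D geometry
  image(overlay, 0, 0);
  hint(ENABLE_DEPTH_TEST);
}

Drawing the overlay as an image keeps the 2D elements unaffected by the 3D transformations of the main scene.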

Lessons learned

  1. Communication is more than important. When your idea conflicts with your partner’s, be patient. Keep exchanging ideas and feelings, and you will most likely develop a new idea that is better than both.
  2. Choose and abandon ideas. For the effect of the whole project, you have to give up some really fancy ideas. For example, we gave up the cool light matrix because it was only that cool in a theater or another indoor environment. We gave up the idea of importing a video because it conflicted with another part of our code and we could not figure out how to solve it in a short time.
  3. Details matter, and small mistakes can be dangerous. A missing “}” in Processing may make you debug for 3 hours, and a little mistake in the soldering may break the whole circuit.
  4. Time management is important. We did not finish our project before the day of the show. Luckily, we made it in time, but more time should be left in case of emergency.
  5. When debugging, split the code into small pieces and check them one by one by commenting out the other parts. This is a very useful skill that has saved me a lot of time.

Improvements to be made

  1. More stories and interactions could happen after Superman finds the little mermaid.
  2. Introduce a narrator so that the project is more learnable.
  3. Redesign and remake the circuit; currently, the bubble machine and the fan are very unstable.
  4. Make the sketch fullscreen while keeping the animation fluent.
  5. Introduce more interactions.
  6. Perhaps go back to the beginning: rethink the light matrix?

Feelings

The project was rather tiring, especially when we had to make a screen by ourselves! However, the sense of achievement made me feel all the hard work was worthwhile. Cooperating with Fred was a very cool experience, too.

This semester, Interaction Lab left a very deep impression on me and gave me an insight into the fancy world of IMA. I will leave IMA temporarily since I do not have IMA lessons next semester, but I hope we can meet again in the future!

References

https://www.openprocessing.org/sketch/428956

https://www.openprocessing.org/sketch/156580

Code

(The Arduino code is not listed; it was nothing more than serial communication. A rough sketch of what it might have looked like is below.)
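Here is a hedged sketch of the Arduino side, reconstructed from the ‘H’ command and the status byte the Processing code expects; the pin numbers and wiring are assumptions for illustration.

const int BUBBLE_PIN = 9;  // transistor driving the bubble machine (assumed pin)
const int FAN_PIN = 10;    // transistor driving the fan (assumed pin)

void setup() {
  Serial.begin(9600);  // must match the 9600 baud used in the Processing sketch
  pinMode(BUBBLE_PIN, OUTPUT);
  pinMode(FAN_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    if (c == 'H') {            // Processing sends 'H' during the underwater scene
      digitalWrite(BUBBLE_PIN, HIGH);
      digitalWrite(FAN_PIN, HIGH);
    } else if (c == 'L') {     // 'L' switches the effects off
      digitalWrite(BUBBLE_PIN, LOW);
      digitalWrite(FAN_PIN, LOW);
    }
    Serial.write(1);           // status byte read as valueFromArduino in Processing
  }
  delay(10);
}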

 

 

 

//3D Terrain Generation - Adam Vozzo
 //import processing.video.*;
//Movie myMovie;
PImage image,image2;
int theater=0;
 boolean sit = false;
 String message1 = "Superman, welcome to my palace!" ;
 String message2 = "Superman, soar!";
 String message3 = "Squat to dive!";
  //String message4 = "please give me your applause!";
PImage door,mountain;
 PGraphics underwaterz;
 bubble[] bubbleArray = new bubble[100];
  int speed1=1;
     int  speed2=-1;
int z = 0;
int t=0;
int prevt=0;
int cols, rows;
int scl = 20; //Scale of waves
int m;
boolean push;
//fill view with terrain
int w =6000 ; // width of sea
int h = 8000; //  height of sea

float flying = 0; //the speed at which the noise generation is moved

float [][] terrain; //2d array to make the grid
float stopY,handY;
boolean dive=false;
//Colours
color c1 = color(35, 205, 219); //strip fill
color c2 = color(22, 57, 180); //strip stroke

//To aid colour variability
float ca = 0; 
float shoulderY,handX;
//scanline thickness
int t1;
//Sun Rotation
float r1 = 0, sunrise=800;
float perspective= PI/2;
boolean applause=false;
import processing.serial.*;


Serial myPort;
int valueFromArduino;


void setup() {
  size(1920, 1080, P3D);  // size() should be the first statement in setup()
  //fullScreen(P3D);
  textSize(40);
  image = loadImage("theater2.jpg");
  image2 = loadImage("little mermaid.png");

  kinect = new KinectPV2(this);
  kinect.enableSkeletonColorMap(true);
  kinect.enableColorImg(true);
  kinect.init();

  door = loadImage("door.png");
  mountain = loadImage("mountain.png");
  frameRate(120);
  colorMode(HSB, 360); //HSB to have better control over the brightness of colours, and to smoothly transition the background
  smooth();

  //The size of the grid
  cols = w / scl;
  rows = h / scl;

  terrain = new float [cols] [rows];

  t1 = 10; //Decide the thickness of scanlines
  
  for (int i=0; i< bubbleArray.length;i++){
   bubbleArray[i]= new bubble((int)random(100,300),(int)random(height+10,height+1000),(int)random(15,55));
 }
 bouncers = new ArrayList<Mover>();

  for (int i = 0; i < 200; i++)
  {
    Mover m = new Mover();
    bouncers.add (m);
  } 
  frameRate (30);
  underwaterz = createGraphics(width,height);
  printArray(Serial.list());
  // this prints out the list of all available serial ports on your computer
  myPort = new Serial(this, Serial.list()[0], 9600);
  // WARNING! Make sure the index [0] matches your Arduino's port in the list above
 
}

void draw() {
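  // "push" is true when both hand tips are level with each other and near
  // shoulder height (the arms-forward Superman pose seen by the Kinect)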
  push = (abs(trackY-handY) < 150 && abs(shoulderY-trackY) < 200);
  //println(trackY-shoulderY);
   kinect();
  if(t>-999){
  translate(0,t,0);
translate(0,0,z);
//print(mouseX); print(","); println (mouseY);
 if(prevt-t==50){m=millis();}
 if (z>2000&&z<2500){stopY=headY;}
if (z<3000&& push==true){
z=z+50;
text(message2, 800, 400);
//println(z);
}
if( z==3000){text(message3, 850, 400);}
if(z==3000 && headY-stopY>300){dive=true;}
if(dive==true &&t<200 && prevt<=t){prevt=t;t+=20;}
if(dive==true&&t>-1000){prevt=t;t-=50;sunrise-=50;}
//else if(h<4000){h++;println(h);}
//else if (perspective> PI*0.49){perspective=perspective-PI/300;}
//print(perspective);
//print(",");
//println(z);
  //Lights coming from different angles to achieve desired lighting
  pointLight(255, 255, 255, -width, -height, -width);
  pointLight(255, 255, 255, width, height, width);

  //Sunlight reflection
  //the combination of these spotlights increases the intensity of the light closer to the sun
  spotLight(50, 30, 60, width/6.6, -height-300, -500, width/5, height, -100, PI/10, 3); //Sun Reflection off water
  spotLight(50, 300, 600, width/6.6, -height, -200, width/5, height-400, -400, PI/2, 3); //brightens the ocean and sun

  //println(frameRate); //to analyse what slows the sketch and make it more efficient

  //smooth colour transition background
  if (ca > 360) {
    ca = 0;
  } else {
    ca += 1;
  }
  color c6 = color(ca, 200, 200);
  background(#ADD6F5);

  //rotation speed and creation of the sun
  r1 += 0.008;
  sunrise=100;
  sun(r1,sunrise);

  //calculating the movement of the grid
  flying -= 0.02;
  float yoff = flying; //y offset
  for (int y = 0; y < rows; y++) {
    float xoff = 0; //x offset
    for (int x = 0; x < cols; x++) {
      terrain [x][y] = map(noise(xoff, yoff), 0, 1, -150, 130); //smaller mapping of noise, lower waves
      xoff += 0.02; //smaller the value, more precise noise calculation
    }
    yoff+= 0.09; //smaller the value, more precise noise calculation
  }

  //Matrix to stop the sun and scanlines being affected by the movement here
  //Defining the properties of the grid
  pushMatrix();
  strokeWeight(0); 
  stroke(c2);
  noFill();
  fill(c1);
  translate (width/2, height/2+200);
  rotateX(perspective); //60 degrees, flyover perspective

  translate(-w/2, -h/2); //centers the triangle strip

  //need to consistently adjust the Z-axis of the grid, not just random
  //achieved using the noise above
  //Nested loop to determine grid vertices
  for (int y = 0; y < rows-1; y++) {
    beginShape(TRIANGLE_STRIP);
    for (int x = 0; x < cols; x++) {
      vertex(x*scl, y*scl, terrain [x] [y]);
      vertex(x*scl, (y+1)*scl, terrain [x] [y+1]);
    }
    endShape();
  } 
  popMatrix();

  //Layer lines over the grid and Sun to look like a CRT monitor
  scanlines(t1);
 
  
  pushMatrix();
  translate(0,1000,-3000);
  underwater2();
  popMatrix();
  //println(headY-stopY);
  }
 
else {underwater();}
//println(t);
 /*if (millis()-m<10000) {
    myPort.write('H');
  }
  else {
    myPort.write('L');
  }*/
}
class bubble{
   int s1;
   int s3= 2;
   int s2;
  int x, y;
  float ex,ey;
  
  bubble (int x, int y, int z){
ex=x;
ey=y;
s1=z;
s2=s1-9;
}
void display(){
underwaterz.fill(255, 10);
  underwaterz.stroke(255);
  underwaterz.strokeWeight(0.5);
  underwaterz.ellipse (ex, ey, s1, s1);
//  ey--;
  underwaterz.noFill();
  underwaterz.strokeWeight(s3);
  underwaterz.arc(ex, ey, s2, s2, radians(200), radians(260)); 
  underwaterz.arc(ex, ey, s2, s2, radians(300), radians(310));
  //println(s2);
}

}
ArrayList <Mover> bouncers;
float trackX,trackY;
float headX,headY;
float bottomX,bottomY;
int bewegungsModus = 3;
class Mover
{
  PVector Fdirection;
  PVector Flocation;

  float Fspeed;
  float FSPEED;

  float FnoiseScale;
  float FnoiseStrength;
  float FforceStrength;

  float FellipseSize;
  
  color Fc;


  Mover () // constructor = setup of the Mover class
  {
    setRandomValues();
  }

  Mover (float Fx, float Fy) // overloaded constructor of the Mover class
  {
    setRandomValues ();
  }

  // SET ---------------------------

  void setRandomValues ()
  {
    Flocation = new PVector (random (width), random (height));
    FellipseSize = random (4, 15);

    float Fangle = random (TWO_PI);
    Fdirection = new PVector (cos (Fangle), sin (Fangle));

    Fspeed = random (4, 7);
    FSPEED = Fspeed;
    FnoiseScale = 80;
    FnoiseStrength = 1;
    FforceStrength = random (0.1, 0.2);
    
    setRandomColor();
  }

  void setRandomColor ()
  {
    int colorDice = (int) random (4);

    if (colorDice == 0) Fc = #ffedbc;
    else if (colorDice == 1) Fc = #A75265;
    else if (colorDice == 2) Fc = #ec7263;
    else Fc = #febe7e;
  }

  // GENERAL ------------------------------

  void update ()
  {
    update (0);
  }

  void update (int Fmode){
  
    if (Fmode == 3) // seek
    {
      Fspeed = FSPEED * 0.7;
      if(millis()-m<50000){
     seek (trackX, trackY);}
      else
   {seek(1440,760);}
      move();
    }
    Fdisplay();
  }

  // FLOCK ------------------------------

  void flock (ArrayList <Mover> boids)
  {

    PVector Fother;
    float FotherSize ;

    PVector FcohesionSum = new PVector (0, 0);
    float FcohesionCount = 0;

    PVector FseperationSum = new PVector (0, 0);
    float FseperationCount = 0;

    PVector FalignSum = new PVector (0, 0);
    float FspeedSum = 0;
    float FalignCount = 0;

    for (int Fi = 0; Fi < boids.size(); Fi++)
    {
      Fother = boids.get(Fi).Flocation;
      FotherSize = boids.get(Fi).FellipseSize;

      float Fdistance = PVector.dist (Fother, Flocation);


      if (Fdistance > 0 && Fdistance <70) //align + cohesion
      {
        FcohesionSum.add (Fother);
       FcohesionCount++;

        FalignSum.add (boids.get(Fi).Fdirection);
        FspeedSum += boids.get(Fi).Fspeed;
        FalignCount++;
      }

      if (Fdistance > 0 && Fdistance < (FellipseSize+FotherSize)*1.2) // separate on collision
      {
        float Fangle = atan2 (Flocation.y-Fother.y, Flocation.x-Fother.x);

        FseperationSum.add (cos (Fangle), sin (Fangle), 0);
        FseperationCount++;
      }

      if (FalignCount > 8 && FseperationCount > 12) break;
    }

    // cohesion: move toward the center of your neighbors
    // separation: don't run into others
    // align: move in the direction of your neighbors

    if (FcohesionCount > 0)
    {
      FcohesionSum.div (FcohesionCount);
      cohesion (FcohesionSum, 1);
    }

    if (FalignCount > 0)
    {
      FspeedSum /= FalignCount;
      FalignSum.div (FalignCount);
      align (FalignSum, FspeedSum, 1.3);
    }

    if (FseperationCount > 0)
    {
      FseperationSum.div (FseperationCount);
      seperation (FseperationSum, 2);
    }
  }

  void cohesion (PVector Fforce, float Fstrength)
  {
    steer (Fforce.x, Fforce.y, Fstrength);
  }

  void seperation (PVector Fforce, float Fstrength)
  {
    Fforce.limit (Fstrength*FforceStrength);

    Fdirection.add (Fforce);
    Fdirection.normalize();

    Fspeed *= 1.1;
    Fspeed = constrain (Fspeed, 0, FSPEED * 1.5);
  }

  void align (PVector Fforce, float FforceSpeed, float Fstrength)
  {
    Fspeed = lerp (Fspeed, FforceSpeed, Fstrength*FforceStrength);

    Fforce.normalize();
    Fforce.mult (Fstrength*FforceStrength);

    Fdirection.add (Fforce);
    Fdirection.normalize();
  }

  // HOW TO MOVE ----------------------------

  void steer (float Fx, float Fy)
  {
    steer (Fx, Fy, 1);
  }

  void steer (float Fx, float Fy, float Fstrength)
  {

    float Fangle = atan2 (Fy-Flocation.y, Fx -Flocation.x);

    PVector Fforce = new PVector (cos (Fangle), sin (Fangle));
    Fforce.mult (FforceStrength * Fstrength);

    Fdirection.add (Fforce);
    Fdirection.normalize();

    float FcurrentDistance = dist (Fx, Fy, Flocation.x, Flocation.y);

    if (FcurrentDistance < 70)
    {
      Fspeed = map (FcurrentDistance, 0, 70, 0, FSPEED);
    }
    else Fspeed = FSPEED;
  }

  void seek (float Fx, float Fy)
  {
    seek (Fx, Fy, 1);
  }

  void seek (float Fx, float Fy, float Fstrength)
  {

    float Fangle = atan2 (Fy-Flocation.y, Fx -Flocation.x);

    PVector Fforce = new PVector (cos (Fangle), sin (Fangle));
    Fforce.mult (FforceStrength * Fstrength);

    Fdirection.add (Fforce);
    Fdirection.normalize();
  }

  

  // MOVE -----------------------------------------

  void move ()
  {

    PVector Fvelocity = Fdirection.get();
    Fvelocity.mult (Fspeed);
    Flocation.add (Fvelocity);
  }

 
  // DISPLAY ---------------------------------------------------------------

  void Fdisplay ()
  {
  underwaterz.noStroke();
    underwaterz.fill (Fc);
    underwaterz.ellipse (Flocation.x, Flocation.y, FellipseSize, FellipseSize);
  }
}
import KinectPV2.KJoint;
import KinectPV2.*;

KinectPV2 kinect;




void kinect(){
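  // fetch tracked skeletons from the Kinect and cache the joint coordinates
  // (hand tips, head, spine base, right shoulder) used by the gesture logic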
  
  
  ArrayList<KSkeleton> skeletonArray =  kinect.getSkeletonColorMap();

  //individual JOINTS
  for (int i = 0; i < skeletonArray.size(); i++) {
    KSkeleton skeleton = (KSkeleton) skeletonArray.get(i);
    if (skeleton.isTracked()) {
      KJoint[] joints = skeleton.getJoints();

      color col  = skeleton.getIndexColor();
      fill(col);
      stroke(col);
      drawBody(joints);

      //draw different color for each hand state
      //drawHandState(joints[KinectPV2.JointType_HandRight]);
      //drawHandState(joints[KinectPV2.JointType_HandLeft]);
      trackX = joints[KinectPV2.JointType_HandTipRight].getX();
      trackY = joints[KinectPV2.JointType_HandTipRight].getY();
      headX = joints[KinectPV2.JointType_Head].getX();
      headY = joints[KinectPV2.JointType_Head].getY();
      handY = joints[KinectPV2.JointType_HandTipLeft].getY();
      handX = joints[KinectPV2.JointType_HandTipLeft].getX();
      bottomX = joints[KinectPV2.JointType_SpineBase].getX();
      bottomY = joints[KinectPV2.JointType_SpineBase].getY();
      shoulderY = joints[KinectPV2.JointType_ShoulderRight].getY();
    }
  }

  fill(255, 0, 0);
  text(frameRate, 50, 50);
}

//DRAW BODY
void drawBody(KJoint[] joints) {
 

  drawJoint(joints, KinectPV2.JointType_HandTipLeft);
  drawJoint(joints, KinectPV2.JointType_HandTipRight);


  drawJoint(joints, KinectPV2.JointType_Head);
    drawJoint(joints, KinectPV2.JointType_SpineBase);
}

//draw joint
void drawJoint(KJoint[] joints, int jointType) {
  pushMatrix();
  translate(joints[jointType].getX(), joints[jointType].getY());
// print(joints[jointType].getX());
 //print(",");
 //println(joints[jointType].getY());
  underwaterz.ellipse(0, 0, 25, 25);
  popMatrix();
}


void handState(int handState) {
  switch(handState) {
  case KinectPV2.HandState_Open:
    fill(0, 255, 0);
    break;
  case KinectPV2.HandState_Closed:
    fill(255, 0, 0);
    break;
  case KinectPV2.HandState_Lasso:
    fill(0, 0, 255);
    break;
  case KinectPV2.HandState_NotTracked:
    fill(255, 255, 255);
    break;
  }
}
void scanlines(int thicc) {
  translate(0, 0, 350); //needs to be translated forward so the lines don't clip with the strip
  stroke(30, 50); //added some transparency so the lines don't darken the image as much
  strokeWeight(thicc); //Can vary the thickness of the lines, but looks better consistent

  //Drawing lines from the top of the window to the bottom, with gaps to emulate real scanlines
  for (int i = 0; i < height; i++) {
    if (i % 4 == 1) {
      line(0, i, width, i);
    }
  }
}
void sun(float rotation, float sunrise) {
  PVector location = new PVector(width/2,100+height/2,-1200); //not needed, but keeping for reference
 // println(location.x);
  pushMatrix(); //Matrix so the rotations don't affect the triangle strip
  translate(0, sunrise, -7000); //Pushed far back behind the strip
  translate(width/2, 100+height/2);
  noStroke();
  fill(1060, 1000, 1000); //60
  rotateY(rotation);  
  sphereDetail(30);
  shininess(300);
  sphere(450);
  fill(50, 1000, 1000, 200); //As bright and saturated as possible, but transparency darkens it
  stroke(60, 150, 300);
  strokeWeight(0.1);
  shininess(10.0); 
  sphere(500); //A larger transparent sphere with stroke, encompassing the smaller sphere
  popMatrix();
}
void theater() {
  // myMovie = new Movie(this, "little mermaid.mov");
  translate(0, 0, theater);
  fill(0);
  rect(0, 0, width, 50);
  rect(0, height-50, width, 50);
  image(image, -100, 50, width+200, height-100);
  image(image2, 900, 600, 150, 300);
  if (theater<110) {
    theater++;
  }
  if (theater==110) {

    fill(255);
   
    text(message1, 650, 350);
    }
    }
    
    
    
/* Leftover code from an earlier version that played a movie and detected applause:

    if (sit==false) {
      text(message1, 650, 350);
      text(message2, 600, 450);
    } else {
      text(message3, 590, 350);
      text(message4, 670, 450);
    }

    if (applause==true) {
      myMovie.loop();
      image(myMovie, 0, 0, width, height);
    }

    if (bottomY==200) {
      sit = true;
    }
    if (abs(trackY-handY)<150 && abs(trackX-handX)<50) {
      applause=true;
    }

void movieEvent(Movie m) {
  m.read();
}
*/
void underwater(){
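  // deep-sea scene: send 'H' so the Arduino turns on the bubble machine, then,
  // while the Arduino reports a value, draw the underwater gradient, door, and
  // fish on the virtual canvas; otherwise fall back to the palace scene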
  myPort.write('H');
  while ( myPort.available() > 0) {
    valueFromArduino = myPort.read();
}
println(valueFromArduino);
if(valueFromArduino>0){
 
//if(millis()-m<20000){trackX=joints[ KinectPV2.JointType_HandTipRight].getX();
//trackY=joints[ KinectPV2.JointType_HandTipRight].getY();}
//else{trackX=100;trackY=200;}
  underwaterz.beginDraw();
    
  underwaterz.colorMode(RGB,100);
  for ( int i=0;i <= height;i++){
    underwaterz.stroke(14,map(i,0,height,80,0),70);
    underwaterz.line(0,i,width,i);}
    underwaterz.colorMode(HSB,360);
    underwaterz.image(mountain,1250,300);
   underwaterz.image(door,1180,530);
    
   
    /*for (int z =1; z<10;z++){
 for ( int i=10*z; i<10*(z+1)-5;i++){
   bubbleArray[i].ex--;
   bubbleArray[i].ey-=10;
  bubbleArray[i].display();
    }

 
 
    for (int i=10*(z+1)-4; i< 10*(z+1) ;i++){
   bubbleArray[i].ex++;
   bubbleArray[i].ey-=10;
    bubbleArray[i].display();
   
}}*/

      
//pushMatrix();
//translate(0,0,1.1);
  int Fishi = 0;
  while (Fishi < bouncers.size () )
  {
    Mover m = bouncers.get(Fishi);
    if (bewegungsModus != 5) m.update (bewegungsModus);
    
    Fishi = Fishi + 1;
    
  }
//popMatrix();
  //fish
 underwaterz.endDraw();
 image(underwaterz,0,0);
 //println(m-millis());
}
else{theater();}
  
}
void underwater2() {
  colorMode(RGB, 100);
  for (int i = 0; i <= height; i++) {
    stroke(14, map(i, 0, height, 80, 0), 70);
    line(0, i, width, i);
  }
  colorMode(HSB, 360);
}

Interaction Lab assignment – Ian from Rudi’s session

Prompt

After reading the assigned materials and checking in detail the Real Life Applications and Art Installations sections in our slides, choose one example of Computer Vision. Research that example and post your findings on your blog. Be specific about which tools you think were used and which challenges the developers faced. In your response, clarify why you chose it and how it relates to your own experiences with Computer Vision. Due December 7th.

The example I chose and why I chose it

I chose self-driving cars both because I personally like them so much and because I believe they are a technology that will bring revolutionary change. Imagine that there will be no drivers in the future; imagine there will be far fewer traffic accidents… I think that, with the development of technology, self-driving will be perfected in the near future.

Which challenges the developers faced

I think the main challenge the developers face is how to detect the cars and people on the road; in the reading, this is called “detecting presence.” It is very hard to achieve considering how many objects may appear on the road: pedestrians, cars, buses, bicycles, motorcycles, cats and dogs, etc. Pedestrians have different body shapes and cars have different colors, so it is almost impossible for computer vision alone to detect all these objects. A probable solution may be using a combination of sensors instead of only one.

Another challenge is to detect and predict the movement of the objects on the road; in the reading, this is called “detecting motion.” A self-driving car needs to predict how and where the objects are moving so that it can take action to avoid traffic accidents. This is very hard to achieve, too, because the movements of the objects are very irregular. It is hard even to predict the movement of cars; how can we predict the movement of a cat on the road?

Which tools I think were used

Various tools should be used. The camera is the best technology for detecting different objects on the road. Radar is already a mature technology used in transportation. A computer will of course be used to analyze the data collected by the sensors and decide on reactions. A tri-axis accelerometer can be used so that the car can tell not only its speed forward and backward and left and right, but also whether it is climbing or going downhill, which is also very important to a driver: when climbing, the engine should work harder to avoid being rear-ended by the cars behind, and when going downhill, the speed should be controlled. A light sensor, such as a photoresistor, should also be used: when it is dark, the car should know it and automatically turn on its lights so that its camera can work well.

 

 

 

 

Recitation-11 Interaction Lab : Media Controller

  • Date: 2.12.2017 
  • Instructor: Rudi
  • Documented by: Yian Zhang (Ian)
  • Course: Interaction Lab
  • Thanks for help: Luis

Task

Create a Processing sketch that controls media elements (images, videos, and audio) using a physical controller made with Arduino. Think about how you can incorporate interactivity and computation into this week’s exercise.

Idea

I decided to use a moisture sensor to control a video of flowing water. When there is water on the sensor, the video plays, so the water on the screen flows; otherwise, the video pauses. It is like the water in the real world is controlling the water on the screen!

Problems

I managed to find a video online, learned how to play and pause a video through Processing, and reviewed serial communication, but then found that the video could not be used.

 

So I tried a lot of video converters downloaded from the Internet and finally managed to convert the .mp4 file into a .mov movie.

Then, after some coding and adjustment, the project was done. A rough sketch of the approach follows.
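Here is a minimal sketch of the approach (a reconstruction, not my exact code); the file name “water.mov” and the wetness threshold are assumptions.

import processing.video.*;
import processing.serial.*;

Movie water;
Serial myPort;
int sensorValue = 0;

void setup() {
  size(640, 360);
  water = new Movie(this, "water.mov");  // assumed file name
  myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  while (myPort.available() > 0) {
    sensorValue = myPort.read();  // one byte per moisture reading from Arduino
  }
  if (sensorValue > 50) {  // assumed threshold: wet, so let the water flow
    water.play();
  } else {                 // dry, so freeze the water
    water.pause();
  }
  image(water, 0, 0, width, height);
}

void movieEvent(Movie m) {
  m.read();
}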

The videos

 

The Code

 

 

From this recitation, I learned how to convert videos to the .mov format, how to load videos into Processing, how to play a video, and how to call video functions. These skills are very important for me because they can really help me make my final project more beautiful. I would like to consider using a video as the background in my house.

Recitation-10 Interaction Lab : Object Oriented Programming

  • Date: 17.10.2017 
  • Instructor: Rudi
  • Documented by: Yian Zhang (Ian)
  • Course: Interaction Lab
  • Instructor today: Luis & Jack
  • Partner: Rosie

The recitation

In this recitation, I was in the Object-Oriented Programming workshop. Luis and Jack talked about the concept of objects and taught us how to write an object-oriented project. Object-oriented programming means programming with objects: we define a class first and create an array of objects in another tab, and a for loop is often used to initialize the array. One benefit of object-oriented programming is that you can have a large number of objects, or even infinitely many, on the screen while only writing the code once (see the minimal sketch below). Another benefit is that the code of the class can be shared with others, so it is easier to cooperate with other programmers.
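As a minimal illustration of the pattern (my own example, not the workshop code), here is a Ball class defined once, with a for loop initializing an array of objects:

Ball[] balls = new Ball[20];  // an array of objects

void setup() {
  size(400, 400);
  // the for loop initializes every object in the array
  for (int i = 0; i < balls.length; i++) {
    balls[i] = new Ball(random(width), random(height));
  }
}

void draw() {
  background(255);
  for (int i = 0; i < balls.length; i++) {
    balls[i].move();
    balls[i].display();
  }
}

// the class is written once, usually in its own tab
class Ball {
  float x, y;

  Ball(float x0, float y0) {
    x = x0;
    y = y0;
  }

  void move() {
    x += random(-2, 2);
    y += random(-2, 2);
  }

  void display() {
    ellipse(x, y, 20, 20);
  }
}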

The task

Our job was to make our own class, write a reference sheet, share it with our partner, and use each other’s class to make a project while reading only the reference sheet.

I made a class of Mickey Mouse heads whose ear colors can change. Below are the class I made, the reference sheet I wrote, the code written by my partner, and the effect.

 

 

 

 

Rosie made a class of feathers that can move about. Below are the class she made, the reference sheet she wrote, the code written by me, and the effect.

 

Ian’s Final Project Essay- Rudi’s session

Yian Zhang
Rodolfo Cossovich
Interaction Lab
21 November 2017

                                    Final Project Essay

Definition of Interaction

Interaction is a series of actions and reactions between two people, or between a person and a computer. Listening, understanding, and reacting are three essential parts of interaction. Both participants in an interaction should be able to know what the other is doing, understand the meaning of the action, decide how to react, and manage to carry out the reaction.

About my final project

I am going to collaborate with Fred on the final project. He is good at decoration, physical structure, and concept, while I like coding and designing and have some physics knowledge. We are going to make a scene-experience project. It is like a combination of VR and AR, but different from both. The user can enjoy a beautiful, delicately designed scene and interact with it. The physical components of the project will include: a little house made by ourselves with 3 walls (the fourth wall is removed so that the project stays open to the rest of the audience while the user is in the room), a curtain screen, a projector, several bulbs, several LEDs, a pressure sensor, a Leap Motion, an ultrasonic sensor, and a music player.
This is the first draft of our script: the story starts in a room where the light is dim. A light hangs from the ceiling. On the screen, there is a window and a door drawn with Processing. Suddenly light comes into the room through the window, and the door begins to shake, guiding the user to open it. When the user stands in front of the door, he or she is detected by the pressure sensor, and the Leap Motion turns on. In the meantime, the doorknob glows and shakes, suggesting that the user should turn it. The user’s action is detected by the Leap Motion, and if the doorknob is grabbed and turned, the door opens. Then the background changes to outdoors: a beautiful night scene of a prairie. The LED lights hung from the ceiling are lowered to play the part of shining stars. When the user steps back, the movement is detected by the ultrasonic sensor; then the sun rises and the background scenery changes.

Critique an established form of interaction or specific interactive experience

The idea of the final project is based on my reflection on my midterm project, the Bomberman game controlled with joysticks. The game involved difficult coding, and it ran very well after my hard work. However, the user does not get enough sense of interaction when playing the game. Although the joystick is a classic interaction interface, the audience will not feel excited about it because it has been used for decades and everybody is already more or less tired of it. More importantly, the way a joystick connects the physical world and the digital world is not clever enough. In the digital world, the characters are running and placing bombs; in the physical world, the players are just moving their thumbs. There is hardly any association between moving thumbs and running or placing bombs. Professor Moon gave me a very inspiring suggestion: since the characters in the game are pictures of the faces of IMA fellows, I should put an accelerometer on the head of the user so that the player can shake their face to move the “faces” on the screen! This is a wonderful way of connecting the digital and physical worlds; it is very natural and great fun. The final project concentrates on this kind of connection. For example, a Leap Motion is used: when the user performs the action of opening a real door, the door in the virtual world opens! We even plan to make a real room with cardboard and 3D printing techniques to enhance this feeling of connection. It is a little like VR and AR but different from both. More interesting sensors will be added to achieve this effect.
Another problem with my midterm project was the lack of learnability. During testing, many users did not understand the game or how to play it. The article “Making Interactive Art: Set the Stage, Then Shut Up and Listen” points out that the audience should learn how to use the project by trying it, without instruction from the designer. In the Norman door video, when a user goes through a well-designed door, he will not even realize there is a door! Again, a good interaction is very natural: there should be a doorknob on a door that is meant to be pulled, and if the door is to be pushed, nothing should be added to the door itself. Similarly, many hints are added to my project to guide the user, for example the light shining into the room and the shaking of the doorknob.

 

 

 

 

Ian’s Rubber Stamp – Interaction Lab, Rudi’s Session

  • Date: 18.10.2017 
  • Instructor: Rudi
  • Documented by: Yian Zhang (Ian)
  • Course: Interaction Lab

The assignment is to design my own stamp using Illustrator.

This is my drawing:

And this is how it looks when made into a stamp:

 

The fabrication techniques include laser cutting, 3D printing, and so on. Fred and I are going to make a room where the user can experience beautiful scenery and interact with it. We can use laser cutting to decorate the room and create the atmosphere. Also, 3D printing can be used to make critical mechanical parts of the interaction system; for example, it can be used to make a wearable device. We will have a clearer idea of how we can use these techniques when we have a more detailed plan for our project. I believe the fabrication techniques will help us a lot.

Recitation-9 Interaction Lab : 3D Modeling

  • Date: 17.10.2017 
  • Instructor: Rudi
  • Documented by: Yian Zhang (Ian)
  • Course: Interaction Lab
  • Thanks for help: Luis

Task

Using Tinkercad, design a 3D model of a wearable device, a game controller or a mechanism (for a component) that utilizes at least one of the provided components or equipment.

Concept

I made a Bomberman game for my midterm project. I solved a series of technical problems to make it, and it was great fun; many of my classmates liked it very much.

However, there is a critical problem with it: it failed to demonstrate the idea of “interaction” well enough. Since I used joysticks as the interface, the digital world and the physical world are still separated, while a good interaction project aims at connecting the digital world and the physical world.

Professor Moon gave me an excellent suggestion: I should attach an accelerometer to the head of the user so that they can control the character in the game by moving their head. Since the characters are represented by headshots in the game, it seems very natural for the user to move their face to move the “face” on the screen.

When I learned 3D printing, I realized that it was a wonderful chance to put the idea into practice.

 

Design

I made a helmet to support the accelerometer. I measured the size of the accelerometer and made a groove on the top of the helmet slightly bigger than it. A hole is made on each side so that a band for fixing the helmet on the user’s head can be attached. Four tiny holes allow wire connections. Below are pictures of the helmet from different angles.

 

Tour

Led by Professor Marcela, we were shown a series of devices, including 3D printers and laser-cutting equipment. The devices were so cool, and I hope they can help me make outstanding projects!

Recitation-8 Interaction Lab : Drawing Machines

  • Date: 11.10.2017 
  • Instructor: Rudi
  • Documented by: Yian Zhang (Ian)
  • Partner: Eos
  • Course: Interaction Lab
  • Thanks for help: Luis

Tasks

Create drawing machines by using an H-bridge to control stepper motors attached to mechanical arms.

Part 1: Build the following circuit to control the stepper.

Part 2: Use your potentiometer and the MotorKnob example to control your motor.

Part 3: Write a sketch on Processing that sends values to Arduino. Replace the potentiometer by using the values from Processing to control the motor.

Part 4: Then, find another person to work with. Combine your parts in a mechanical arm that can hold a marker.

Potentiometer-controlled motor

A potentiometer was added to the circuit. By using the map() function, we can control the motor by turning the potentiometer; a sketch along these lines is below.
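A hedged sketch of this setup, modeled on Arduino’s MotorKnob example (the pins, steps-per-revolution, and analog input are assumptions):

#include <Stepper.h>

const int STEPS = 200;  // steps per revolution of the motor (assumed)
Stepper stepper(STEPS, 8, 9, 10, 11);  // assumed H-bridge pins
int previous = 0;

void setup() {
  stepper.setSpeed(30);  // motor speed in rpm
}

void loop() {
  // map the 0-1023 potentiometer reading onto one revolution of the motor
  int val = map(analogRead(A0), 0, 1023, 0, STEPS);
  stepper.step(val - previous);  // move by the change since the last reading
  previous = val;
}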

Sometimes the motor vibrated badly, and at other times it kept revolving without stopping. I checked the code and the circuit twice but found no problem, so I examined the connections carefully. The cause of the problem was that the potentiometer was not plugged deeply enough into the breadboard, so the computer sometimes failed to read data from it. I fixed the connection, and it worked very well, as shown in the video below.

 

Using Processing to control the motor

I removed the potentiometer and wrote a simple Processing sketch to control the motor. The variable mouseX was sent to Arduino. I set the width to 200 so that I would not need to use map().

A problem came up: once I moved my mouse, the motor kept working and would not stop. The reason was that I wrote “int val=0;” inside the loop, so val became 0 at the beginning of every single loop. I fixed the problem by making val a global variable, as in the sketch below.
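A minimal sketch of the fixed Processing side (my reconstruction, not the original code): val is declared once as a global variable, so it keeps its value between frames instead of resetting to 0.

import processing.serial.*;

Serial myPort;
int val = 0;  // global, so it is not reset at the start of every draw()

void setup() {
  size(200, 200);  // a width of 200 matches the motor range, so no map() is needed
  myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  background(0);
  if (mouseX != val) {
    val = mouseX;
    myPort.write(val);  // send the new position to the Arduino
  }
}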

Working with the partner to draw something crazy!

We combined our projects and made a new one that can hold and move a marker. The picture is really crazy! It made me think of Jack’s project shown at the Maker Carnival: he made a robot that draws pictures that can only be understood when seen from a distance. Drawing is an interesting topic, and maybe we can do more with it using Arduino and Processing. Below are the video, the drawn picture, and the code.

 

Testing report for Tiger’s project- Interaction Lab (Rudi’s session)

  • Date: 11.3.2017 
  • Instructor: Rudi
  • Documented by: Yian Zhang (Ian)
  • Course: Interaction Lab

Project description

Tiger made a super interesting two-player game. One player can pull Doraemon’s tongue with a slide potentiometer to let Doraemon eat the cake. The other player can use another potentiometer to move the cake and help it escape the destiny of being eaten.

 

Suggestions

1. I felt I could not understand the game very well. At first, I did not even know this was a two-player game! Tiger should put some instructions before the game.

2. When someone won the game, there was no evident indication and the game continued. I would advise him to put some words on the screen when a player wins.

3. The movement of the tongue and the cake was not smooth enough; they were blinking instead of moving continuously. My project has quite a good movement effect; maybe he can read my code lol.

Shining points

1. The graphics are wonderful. Doraemon is very cute and attracts players.

2. The sensor he chose was great. When you move the slide potentiometer, you change the length of the tongue, so the slide potentiometer simulates the tongue perfectly. This is exactly what I should learn from him.

 

Testing report for my own project- Interaction Lab (Rudi’s session)

  • Date: 11.3.2017 
  • Instructor: Rudi
  • Documented by: Yian Zhang (Ian)
  • Course: Interaction Lab

We showed our midterm projects to others and got feedback this time. I received several useful and inspiring suggestions.

Before testing

I made a list of things I wanted to learn about my project before the recitation.

1. Do the users hold the joystick in the right way?

2. Is there any bug I did not find?

3. Do the users understand how to control the character?

4. Do they understand the game?

Observation and feedback

When others were testing my project, I observed them and took some notes, and I also talked to them afterwards. I found that some of them could not understand the game well: they did not understand how they could win. Marcela told me that she thought the players were going to chase each other, and whoever caught the other won the game.


Another thing is that many did not know how to use the joystick properly. Most of them knew how to move the character, but many did not know that they could press the button to place the bomb. Many held the joystick upside down despite the tag I put on it.

I also found a bug: sometimes when one of the users pressed the button, the bomb appeared at the corner of the screen instead of where the character was. I guess this is because of a mistake in one of the if statements.

Improvement Plan

  1. Design a “restart” function so that users will not need to close the window and run the project again each time the game ends.
  2. Make some noise when the bomb explodes to make the game more fun.
  3. Add more detailed and clear instructions so that every player can understand the game.
  4. Try other ways of interacting. The joystick is easy to understand, but it is less fun, and it cannot create a sense that the physical world and the digital world are connected. Moon gave me a rather exciting idea: I could put an accelerometer on the head of the user, so the users can control the character by shaking and moving their head. Since the characters in the game are pictures of faces, it sounds very natural for a user to move their face to move the “face” on the screen.
  4. Try to use other ways of interaction. The joystick is good to understand but it is less fun and it cannot create a sense that the physical world and the digital world are connected. Moon gave me a piece of rather exciting idea. I could put an accelerometer on the head of the user. The users can control the character by shaking and moving their head. Since the characters in the game were pictures of faces, it sounds very reasonable for a user to move their face to move the “face” in the screen.