29th April Responses

Response to Cerecare Field Trip

What I loved most about Cerecare was its determination to see its patients not through their disabilities, but as people with skills and capabilities who could, and wanted to, learn. A great example of this is that they were training some of the older children in massage therapy. They were seeing the children's strengths, which many assistive technologies build upon, not just their weaknesses.

In terms of our coursework, I think the visit was necessary because it reinforced the idea that low-tech options work in practice and shouldn't be ignored. I thought it was great that they were using a local carpenter to recreate items they found online that were designed to help with certain challenges. Seeing these solutions also reminded us that some things already exist to assist with certain conditions. Knowing this makes it more imperative that we as creators always look forward and try to create solutions that are either cheaper and faster to make, or quantifiably better. Communicating with the caretakers was also important, because it let us hear first-hand what they identify as the most pressing issues needing remedies. Hearing this is vital, because too often able-bodied individuals imagine problems that may not exist or may not be the most pressing.

With this said, I also shouldn't underestimate how important it was to come in as an outsider with a fresh set of eyes. This was a lesson I learned during my Nutrition class in Accra, and I'm happy to see it reintroduced in this course. In nutrition, it is sometimes hard for those inside a society to identify their own problems, because they may know no different. Likewise, it may be hard for those who work with these children every day to fully recognize what might make life better or easier, simply because they may not realize there is another way. Because of this, I think it is important as outsiders to ask questions of the caretakers and to observe actively, in case we can see something that is harder to see for someone on the inside. An example of this came when I asked our tour guide whether it was a problem for the children to keep their heads up. Our tour guide admitted that the current solution was just to hold the children up, but that it would be great if there were a better one.

Overall, I think this trip has given me a great jumping-off point to start exploring what solutions I might want to ideate.

Using Assistive Features 

For this task, I decided to use the switch controllers to try to search for a video on YouTube. Though there was a bit of a learning curve (for instance, I first misunderstood what was meant by "position"), once you learn how to use it, it becomes pretty intuitive. Though tasks take longer to complete, it is still an innovative means of solving a problem.

Everyday Technology Chart

Screen Shot 2017-05-03 at 8.34.59 AM Screen Shot 2017-05-03 at 9.12.46 AM Screen Shot 2017-05-03 at 9.12.52 AM

[Ix Lab AW] Final Project Documentation-Jinglan Meng

Project Name: Save Me If You Can!

Documented by: Jinglan Meng

Partner: Yanyu Zhu

Instructor: AW

Class Presentation Date: December 12th, 2016

Documentation Date: December 13th, 2016

Subtitle: A stress-release interaction – feed candy to the puppet living on the computer by beating the drum, in order to save her from the growing chain.

Project description: "Save Me If You Can" is played with a simple DIY drum and a computer screen. Players interact with the little ball by beating the drum in real life. Once the ball is within the "mouth" area, beat the drum and you catch it: you win this round. But you have no competitors, so you can never lose; you always have another chance to catch the ball and win. In addition, as the ball travels it leaves a colorful trace on the background, so if you catch the ball early you automatically end up with a painting. If you catch it quite late, the trace looks like chains and the puppet seems imprisoned, which looks very scary! Both outcomes improve the user experience and are quite fun. In this way, the goal of stress relief can be achieved.
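The catch condition itself is tiny. Here is a restatement in Python of the dist(x, y, 250, 260) <= 58 check that appears in our code (treating the mouth as a circle, which is only an approximation):

```python
import math

def caught(ball_x, ball_y, drum_hit):
    """The ball is caught only when the drum is being hit AND the ball
    is within the mouth area, approximated as a circle of radius 58
    around (250, 260), matching the check in our Processing sketch."""
    return drum_hit and math.dist((ball_x, ball_y), (250, 260)) <= 58
```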

Target: Our target audience is people who suffer from stress in everyday life and need something fun and interactive to help them relax (especially people in the NYUSH community tortured by finals). We created our project with this idea in mind.

Here is the demo of our project: 

Screenshot:

screen-shot-2016-12-13-at-17-44-12

Our Drum! 

img_5772

img_5773

About the drum: The mouth on the surface of the drum is exactly the same as the one on the screen, because we wanted it to look lovelier and feel more familiar to the user. We also attached bells around the drum, so that every time you beat it the sound is very beautiful. We made the colorful supporting base to bring the drum up to the average person's height, since otherwise it would be too low, and chose the colors to make users feel happy.

Screen recording: screenrecording

Here is me beating the drum to catch the ball: save-me-if-you-can

Introduction & Making Process: For this Interaction Lab final project, Yanyu and I both realized the immense potential of our midterm project after getting advice from professors and classmates and doing some deep self-reflection of our own, and we were lucky to be able to team up again to perfect our crazy midterm project.

Here are the limitations and directions for improvement we identified after finishing our midterm project, from which we planned the final:

  • We planned to use pixels to recognize color, in order to restrain the ball within the body area (suggested by Professor Moon).
  • We planned to experiment with more physical replacements for the "keyPressed" interaction, such as pressing a really big button (suggested by Professor AJ), using a weight sensor the user jumps on, a sensor that tracks the user's hand or body movement, and so on.
  • We also wanted to ask our friends to test the game and collect their suggestions, in order to create the best user experience possible.
  • We were also thinking about applying the laws of physics to the movement of the balls, to make it more realistic.
  • We also wanted to create a story for our puppet to engage users more; for example, the puppet dies from hunger if it does not eat enough candies (balls) in the required time.

We seriously considered all of these possible directions while brainstorming for the final project. We also changed some of our ideas midway through, because we found them not practical enough.

For the first direction, for example, we originally planned to use pixels to restrain the moving area of the ball. We asked several professors for help and did eventually make it work. However, we found that the pixels function slowed down the whole program, which was definitely not something we wanted. So, although we had spent a whole lot of time on it, we gave it up and looked for other approaches. The time was still worthwhile, because we learned a lot about pixels that we would never have learned without this attempt. We then turned to math and used the distance function to restrain the ball. We wrote the function (which was super complicated) and checked it several times, only to find that it did not work, and no one could help us debug it. Then we thought: we do not really need to restrain the ball at all! Why not just let it bounce everywhere, so the trace forms a chain and creates the visual of locking our puppet up when the ball is not caught for a long time? That happened to be exactly the story we wanted to tell, so in the end we just let the ball go wherever it wants.

For the second direction, we decided it was a good idea to bring the virtual drum into real life, so we built a drum ourselves and put a vibration sensor and an Arduino inside it. For the fourth direction, we decided that bringing real physics into the game would be a mess, and the friends we asked all preferred the original movement, so we gave that idea up.
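For the record, the core of the distance-function idea is simple to state, even though our full version got complicated. A minimal Python sketch of it (the circle centre and radius below are made-up placeholders standing in for the body area, which is not really a circle):

```python
import math

def step_ball(x, y, vx, vy, cx=250, cy=350, r=120):
    """Advance the ball one frame, reversing its direction whenever the
    next position would leave a circle of radius r around (cx, cy).
    The circle is a stand-in for the puppet's body area; the numbers
    here are placeholders, not the ones from our sketch."""
    if math.dist((x + vx, y + vy), (cx, cy)) > r:
        vx, vy = -vx, -vy  # bounce back, like xSpeed * -1 in Processing
    return x + vx, y + vy, vx, vy
```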

Conceptual development:

Ideation

Inspirations: This project is a continuation of our midterm project, an improved pinball game with various kinds of rolling and bouncing balls and dots. We got some really good suggestions from classmates and professors, and we also thought very hard about how to improve the interaction on our own. We adopted Professor AJ's idea of making the key-press button bigger, and moved the virtual drum into real life! In the spirit of being environmentally friendly, we recycled paperboard from used shoe boxes, cut and glued the pieces together, and pasted colored paper on them, and we made a quite pretty drum! The act of clicking the keyboard was thus transformed into beating a drum, making the project more interactive and stress-relieving than the previous one.

In order to create the best user experience possible, we came up with the idea of making a drawing machine, realized through the trace the ball leaves behind in different colors. We went through a lot of trials here, and finally, with Professor Antonius's help, we used the method of adding another canvas on top of the original one. Every time the ball gets eaten, we draw a transparent background onto that canvas so that all the previous traces disappear! Furthermore, we set the color of the trace according to the ball's x- and y-coordinates, to enrich the aesthetic experience of the users.
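The coordinate-to-color rule is easy to state on its own. Here is a Python restatement of the fill(150, y, x, 140) call in our sketch (the clamping mimics how Processing constrains color channels to 0–255):

```python
def trace_color(x, y):
    """Color (r, g, b, alpha) of one trace dot: red fixed at 150,
    green taken from the y-coordinate, blue from the x-coordinate,
    constant transparency of 140, with channels clamped to 0-255."""
    clamp = lambda v: max(0, min(255, int(v)))
    return (150, clamp(y), clamp(x), 140)
```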

Also, we applied what we learned in the second half of the semester and added music to our project to further intrigue the users, and it works!

Research

Reading & Reference: We reviewed the class notes on adding a music piece, and the notes on serial communication for sending data between Arduino and Processing. What's more, we even learned some new code from professors; for example, Professor Antonius taught us how to add another canvas on top of the original one using PGraphics. Making this project gave us a wonderful opportunity to review the code we learned in class and reinforce it through active application, and also the chance to learn new code from professors that we would never have had a reason to learn and apply otherwise. We really appreciate it.

Arduino: Connecting the circuit was not very hard. We added a buzzer to check that the sensor works without using Processing, and used the sensor value to verify it on both the Arduino and Processing sides. These small tips are simple but very efficient.
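On the Processing side, the sensor value arrives over serial as a text line ("1" or "0") that gets trimmed and converted to an int. A hedged Python sketch of that parsing step (the function name is ours, for illustration only):

```python
def parse_sensor_line(raw):
    """Turn one raw serial line (e.g. b'1\r\n') into an int, mirroring
    the trim-then-int conversion in our Processing draw() loop.
    Returns None for empty or non-numeric lines instead of crashing."""
    text = raw.decode("ascii", errors="ignore").strip()
    return int(text) if text.lstrip("-").isdigit() else None
```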

img_5758

Lessons Learned:

Test & Trials & Errors & Know-Hows & Tips

  1. We went through a lot of trial and error with the ball trace. We initially thought the ball already had a trace that was simply being covered by the other drawings in the code, so we moved all the static drawings from "void draw()" to "void setup()" in the hope of revealing it. Because we had so many drawings, this was complicated, tough work, and once it was done we found that the moving objects (the eyes, the little moving ball) all left traces too. Worse, those traces seemed irremovable; we asked several professors and could not work it out. So we abandoned that idea and looked for another. We asked Professor Antonius for help, and he introduced us to PGraphics. We thought this would be really helpful for our project, applied it immediately, and it actually worked! However, we also wanted to remove all the previous traces each time the ball is eaten, since otherwise there would be too many chains on the screen. We tried several functions, but none of them worked. Feeling desperate, we asked a professor nearby for help; after we explained the project and the problem, he suggested drawing a transparent background every time the ball is eaten. We tried it and it worked again! From this problem-solving process we learned not to give up until we make it, no matter how difficult the problem or how desperate we feel. Also, when we have no idea what to do ourselves, it is a good idea to consult the professors.

2. Time management: we did not make full use of our time during the process. We spent a long time restricting the ball's area using pixels, but pixels slowed the whole program down, so we tried a mathematical equation inside an if statement instead. Later, with Antonius's kind reminder, we decided to give this idea up, because it cost a whole lot of time without helping the progress of our project much. I learned that we should have made a priority list to remind us what to do first and what to do next, and that we should not stick with one thing for too long when there are other important things to do.

3. Teamwork: when we first teamed up, we both thought the most efficient approach was to split the work, each focus on a different part, and then meet and merge the parts together. That might be a good idea, but it really depends on what is being split. For example, we split the code writing for our midterm project, and it turned out we had to spend much more time merging it back together. So this time, working on our final project, we decided to write the code together and split only the work that could genuinely be divided, such as building the drum. It worked in the end.

Limitation + Future development direction:

Based on feedback from professors and other users, we were happy that most people liked our project. However, as Professor Antonius pointed out in class, the mouth should close when the drum is beaten; otherwise, when the ball is not eaten, players cannot tell whether something is wrong with the program or with their skills. Our project is a little bit hard to play, so it is important to let the user know that the program is running correctly.

And then here is our code: (the main tag is in the source code!)

Tag2: Draw:
// Send Data from Processing to Arduino
// This code is for the Processing IDE
import cc.arduino.*;
Arduino arduino;
class Ball {
// declare the Ball variables
int x, y;
int xSpeed, ySpeed;
int c;
int counter=0;
boolean check=false;//to make the ball disappear after it is eaten
// constructor
Ball(int parameterX, int parameterY) {
// pass the initial variables to the ball
x = parameterX;
y = parameterY;
// give initial default values to the other variables
c = 20;
xSpeed = 0;
ySpeed = 20;
}
//if(y == 0){
// xSpeed=5;
// ySpeed=4;
//} I once wanted to set the ball's speed here, right after it shoots out, but found it was not global here, so I kept it in the move() function.
// Draw the ball
void render() {
println("render " + counter);
if (counter>0){
counter=counter-1;
}else{
noStroke();
fill(#EA5C5C);
ellipseMode(CENTER);
fill(#EA5C5C);
ellipse(x, y, c, c);
}
}
// Move the ball
void move() {
if (x>width||x<0) {
xSpeed = xSpeed*-1; // to make it go back
}
if (y>height||y<0) {
ySpeed = ySpeed *-1; // to make it go back
}
if (y==0) {
xSpeed=(int)random(9, 15); //at first the speed was fixed, which made the movement very inflexible,
ySpeed=(int)random(6, 15); //so I used a random function and cast the result from float to int.
}
x+=xSpeed;
y+=ySpeed;
if (counter>0){
counter=counter-1;
}else{
pg.beginDraw();
pg.noStroke();
pg.fill(150, y, x, 140);
pg.ellipse(x, y, xSpeed, xSpeed);
pg.endDraw();
image(pg, 0, 1);
}
}
//explode the ball
void explode() {
if (mousePressed&&dist(mouseX, mouseY, x, y)<=c) {
println("here");
bc = color(0, 0, 100);
}
}
void eattheball() {
////vibration control
if (val > 0 &&dist(x, y, 250, 260)<=58) {//since the mouth is not a circle, it is hard to locate precisely;
//we would like to improve it by using "pixel" in the future.
fill(248, 227, 178);
arc(250, 260, 220, 220, 0, PI);
//check=true;//to make the ball disappear after it is eaten
ps.addParticle();
ps.run();
//delay(100);
counter=100;
//println("eat the ball");
pg.beginDraw();
pg.noStroke();
pg.background(0,0);
pg.fill(150, y, x, 140);
pg.ellipse(x, y, xSpeed, xSpeed);
pg.endDraw();
image(pg, 0, 1);
}
}
void resetTheBall() {
// give initial default values to the other variables
if (mousePressed) {
c = 20;
xSpeed = floor(random(9, 15));
ySpeed = floor(random(6, 15));
check=false;
pg.beginDraw();
pg.noStroke();
pg.background(0,0);
pg.fill(150, y, x, 140);
pg.ellipse(x, y, xSpeed, xSpeed);
pg.endDraw();
image(pg, 0, 1);
}
}
}
Tag3: Particle(the splash)
// A simple Particle class
class Particle {
PVector position;
PVector velocity;
PVector acceleration;
float lifespan;
Particle(PVector l) {
acceleration = new PVector(0, -5);
velocity = new PVector(random(-100, 100), random(-80,80));
position = l.copy();
lifespan = 255.0;
}
void run() {
update();
display();
}
// Method to update position
void update() {
velocity.add(acceleration);
position.add(velocity);
lifespan -= 1.0;
}
// Method to display
void display() {
fill(255, lifespan);
stroke(100, position.y, position.x, lifespan);
ellipse(position.x, position.y, 8, 8);
}
// Is the particle still useful?
boolean isDead() {
if (lifespan < 0.0) {
return true;
} else {
return false;
}
}
}
Tag4: particle system
// A class to describe a group of Particles
// An ArrayList is used to manage the list of Particles
class ParticleSystem {
ArrayList<Particle> particles;
PVector origin;
ParticleSystem(PVector position) {
origin = position.copy();
particles = new ArrayList<Particle>();
}
void addParticle() {
particles.add(new Particle(origin));
}
void run() {
for (int i = particles.size()-1; i >= 0; i--) {
Particle p = particles.get(i);
p.run();
if (p.isDead()) {
particles.remove(i);
}
}
}
}
Here is the Arduino code:
#include "pitches.h"
int sensorValue;
void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
}
void loop() {
// put your main code here, to run repeatedly:
sensorValue = analogRead(0);
if (sensorValue > 0) {
Serial.println(1);
tone(8, NOTE_C4, 1000);
delay(60);
} else{
Serial.println(0);
noTone(8);
}
delay(5);
}

Then the Arduino code:

#include "pitches.h"

int sensorValue;

void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
}

void loop() {
// put your main code here, to run repeatedly:
sensorValue = analogRead(0);
Serial.write(sensorValue/4);
if (sensorValue > 0) {
tone(8, NOTE_C4, 1000);
delay(60);
} else{
noTone(8);
}
delay(5);
}

import processing.serial.*;
import processing.sound.*;
SoundFile file;

PGraphics pg;

Serial myPort;  // Create object from Serial class
int val;      // Data received from the serial port

//to start a new game
float speed=0.2;//the speed of the rotating eyes
float a;//angle of the rotating hand
int c = 600;//vertical coordinate of the ball bouncing up and down
float speed2=5;//speed of the ball bouncing up and down
int w; //width of the ball bouncing up and down
int h;//height of the ball bouncing up and down


float[] size = new float[100];//create a new float "size"
float[] rot = new float[100];//create a new float "rot"
float[] s1 = new float[100];//create a new float "s1"
float[] s2 = new float[100];//create a new float "s2"
int[] col = new int[100];//create a new int "col"

color[] palette = {
  #EFFFCD, #555152, #DCE9BE, #2E2633, #99173C
};//the five colors for the eyes
float theta;

ParticleSystem ps;
Ball b1;
//Ball b2;
boolean showpipe=true;
color bc= color(255);


void setup() {
  size(500, 700);
  pg = createGraphics(500, 700); // make a new canvas not shown here
  file=new SoundFile(this, "135.mp3");
  file.play();
  //serial communication
  String portName = Serial.list()[Serial.list().length - 1];
  println(portName);
  myPort = new Serial(this, portName, 9600);

  ps = new ParticleSystem(new PVector(width/2, 600));


  b1 = new Ball(width, 300);
 // b2 = new Ball(width/2, height/4);


  float Sz = 0;//set Sz to be 0 in the beginning
  strokeCap(SQUARE);//set the line ending to be square shape

  ///////////////////////////////the left eye
  for (int i=0; i<100; i++) {//set integer i, 
    //i equals to 0 in the beginning and keep increasing when i is less than 100
    Sz += random(0.01, 0.3);///size of left rotating eyes
    size[i] = Sz; 
    rot[i]= PI/100*i;//angle of each rotating
    col[i] = (int) random(0, 5);//each time choose a color from the palette
    s1[i] = random(0, TWO_PI);
    s2[i] = s1[i] + random(PI/4, PI);
  }
  ///////////////////////////////the right eye
  for (int q=0; q<100; q++) {
    Sz += random(0.6, 0.8);///size of right rotating eyes
    size[q] = Sz ;
    rot[q]= PI/100*q;
    col[q] = (int) random(0, 5);
    s1[q] = random(0, TWO_PI);
    s2[q] = s1[q] + random(PI/4, PI);
  }
}
void draw() {

  while ( myPort.available() > 0) {  // If data is available,
    //    val = myPort.read();         // read it and store it in val
    String myString = myPort.readStringUntil('\n');
    if (myString!=null) {
      myString = trim(myString);
      val = int(myString);
      //println("sensor value: ", val);
    }
  }

  
  //  bouncingball
  noStroke();
  background(203, 173, 110);
  Drawing();
  bouncingball();
  /////////////////////////////////////////////
  strokeWeight(2);
  translate(150, 210);
  for (int i=0; i<100; i++) {
    pushMatrix();
    rotate(theta);
    stroke(palette[i%5], 100);
    noFill();
    arc(0, 0, size[i], size[i], s1[i], s2[i]);
    popMatrix();
  }

  strokeWeight(2);
  translate(200, 0);
  for (int q=0; q<100; q++) {
    pushMatrix();
    rotate(theta);
    stroke(palette[q%5], 100);
    noFill();
    arc(0, 0, size[q], size[q], s1[q], s2[q]);
    popMatrix();
  }
  theta += 0.383;///rotate speed

  /////////////////////BOUNCING BALL
}


void bouncingball() {

  // draw the ball
  if (!b1.check) {
    b1.render();
  }
  //b2.render();
  // move the ball
  b1.move();
  //b2.move();
  //explode the ball
  b1.explode();
 // b2.explode();
  //
  b1.eattheball();
  b1.resetTheBall();
  //
  keyPressed();
}



void Drawing() {
  fill(248, 227, 178);
  ellipse(250, 150, 420, 300);//layer hair

  fill(84, 49, 80);
  ellipse(250, 180, 400, 305);//upper hair


  fill(248, 227, 178);
  ellipse(320, 440, 260, 200);//arm
  ellipse(180, 440, 260, 200);//arm
  fill(203, 173, 110);
  ellipse(320, 440, 130, 130);//arm
  ellipse(180, 440, 130, 130);//arm

  fill(248, 227, 178);
  ellipse(250, 210, 390, 340);//head
  triangle(250, 780, 130, 300, 380, 300);//body

  fill(134, 133, 113);
  ellipse(150, 210, 40, 40);//eye
  ellipse(350, 210, 40, 40);//eye
  fill(10);
  ellipse(150, 210, 30, 30);//eye
  ellipse(350, 210, 30, 30);//eye

  fill(9, 47, 120);
  ellipse(250, 15, 30, 20);//head decoration 

  fill(35, 120, 35);
  ellipse(200, 20, 20, 20);
  fill(4, 120, 127);
  ellipse(300, 20, 20, 20);//head decorations 

  fill(221, 46, 12);
  ellipse(150, 35, 20, 20);
  fill(245, 137, 45);
  ellipse(350, 35, 20, 20);//head decorations 

  fill(111, 168, 138);
  ellipse(100, 64, 20, 20);
  fill(64, 128, 31);
  ellipse(400, 64, 20, 20);//head decorations

  fill(8, 48, 123);
  ellipse(65, 102, 20, 20);
  fill(224, 41, 91);
  ellipse(435, 102, 20, 20);//head decorations
  ////////////////////////////////////////////////////

  strokeWeight(12);
  stroke(84, 49, 80);

  line(310, 540, width/2+35, height/2+210);//the stable hand
  ////////////////below is the trace



  //////////the beating hand
  pushMatrix();
  translate(width/2-55, height/2+190);
  rotate(a);
  line(0, 0, 60, -25); 
  a=0;  
  a+=speed;
  a+=speed;
  speed=speed*-1;
  popMatrix();
  ///////////////////////////////////////////////

  int c=#B40505;
  int b=#B40505;//color of mouth
  strokeWeight(10);
  stroke(c);
  line(140, 260, 360, 260);
  noStroke();
  fill(b);//mouth 
  arc(250, 260, 220, 190, 0, PI);//mouth, it will disappear if being touched
  noStroke();
  fill(#FAE6E6);
  rect(218, 261, 30, 37, 3, 6, 12, 18 );//teeth
  rect(255, 261, 30, 37, 3, 6, 12, 18 );//teeth

  fill(#9B050A);
  ellipse(250, 560, 100, 20);
}
//////////draw the ball bouncing up and down
void keyPressed() {
  fill(250, 234, 201);
  ellipse(430, c, w, h);//body of ball
  fill(0);
  ellipse(417, c-2, 5, 5);
  ellipse(432, c-2, 5, 5);//eyes of ball
  strokeWeight(1);
  stroke(0);
  line(430, c-20, 430, c-30);//antenna of ball
  noStroke();
  fill(#F50A2D);
  ellipse(430, c-30, 10, 10);//head decoration of ball


  c-=speed2;//ball bounces up 
  if (c>=670 ||c<535) {
    speed2=speed2*-1;//when reach the arm ball bounce back down
  }
  if (c>=660) {//when ball is on the ground
    h =34;
    w=55;
  } else {
    h=40;
    w =50;//ball change its shape becomes flatter
  }
}

The Flying Chair Experience – Part 5

The day of:

We had our cubicle ready, and we borrowed a chair from a classroom nearby. We had our sensor working with the sketch, and I managed to fix the "0" bug from before 😀 So the screen no longer constantly flashed back to black unless the sensor was uncovered.

I brought a seat pad to hide the sensor underneath, and after testing a few times it worked well. We borrowed several projectors to project onto the white surface, and this process was more difficult than expected. Very difficult.

First, we could not figure out the best place for the projector: too close to the fabric, and the image would not spread wide enough. We tried setting the projector on a table behind the chair, placing it underneath the chair, and next to the chair… It was hard to find the perfect spot, but at Eric's suggestion we settled on placing it beside the chair, slightly elevated off the ground, because that was the best we could manage.

Cubicle and chair and arrangement of projector and computer:

img_2402 img_2405

Sensor and picture of it working as Ferwa is sitting on the chair:

img_20161205_125021 img_20161205_125219 img_20161205_144051

Second, for some reason Processing was not working with MadMapper. We looked up videos and instructions on the web and followed them. I somehow managed to get MadMapper to detect the Processing sketch, and it looked like it was available, but nothing was projected onto the surface in MadMapper. That was weird, and we did not know what to do, so after fiddling with it for a while we decided to just project normally. That made things hard, because we could not map the edges, and the projector was not projecting exactly the way we wanted, so the experience was not quite there yet.

After presenting the work, I felt that we did a good job with the time we had until Monday. We struggled with things here and there, but we pulled through: we managed to build a cubicle and had the projection going, which was more than half of what we had planned for the final. I think we had good teamwork, and this project can definitely improve a lot.

Video of user interaction:

We also received very useful feedback and comments on the project, some of which we hope to implement for the show. One of the main suggestions was to move the projector behind the cubicle, so that it projects onto the outside of the fabric but can also be viewed from the inside, which would really help the experience we want to go for. It was also suggested that we use pressure sensors triggered as the person sits down, or some sort of conductive-fabric cushion for the chair, so that the user does not know there is a sensor and the sensing area is larger than a single small light sensor. Other suggestions included making the experience more immersive with balloons, or wings and goggles for the user to put on as they sit in the chair.
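The pressure-sensor suggestion would mostly come down to a threshold check on the analog reading, ideally with a little debouncing so a single spike does not trigger the sketch. A hypothetical Python sketch of that logic (the threshold and hold count are made-up values that would need calibrating on the real sensor):

```python
def sat_down(readings, threshold=300, hold=3):
    """True once `hold` consecutive analog readings (0-1023, as from
    Arduino's analogRead) exceed the threshold, i.e. someone has
    actually sat down rather than briefly brushed the cushion."""
    run = 0
    for r in readings:
        run = run + 1 if r > threshold else 0
        if run >= hold:
            return True
    return False
```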

I think that there definitely is room for improvement and it will be good to see how this project progresses.

Some final reflection thoughts:

I really enjoyed everyone's projects. As a class, we progressed through the course with struggles here and there (access to the wood shop, the availability of materials and spaces), but we all made something by the end of the course, something bigger than we could have made in any other class.

This class has taught me to look at space differently, to experience space differently, and to ask myself questions as I walk the streets of Shanghai. My perception of walking down a street has definitely changed, and so has the way I look at interaction. I am also a Resident Assistant, and we have to make bulletin boards; one recent comment from my supervisor was that some of my boards involved interactions with the audience beyond the obvious one (having residents write things on the board): each board was different, and none required writing. Reflecting on this, it feels like no coincidence. I do look at space differently now, and I try to find ways for people to interact with a space differently.

REALLY ENJOYED THE COURSE!

The Flying Chair Experience – Part 4

The class was assigned a new space: Room 933!

We had most of the wood cut and drilled together, and we were arranging the pieces into the shape we wanted before cutting the base supports that would hold all the tall beams together.

Picture of progress and shape of cubicle:

img_2395 img_2398 img_20161204_132910

In the days before the final project was to be shown, we were constantly up in Room 933 and down in the wood shop, cutting materials, bringing them back up, drilling them, and sticking them together.

It was nice to see all the groups working there; it was just a nice building environment, because for me personally it felt like we were all in this together as a class, and I felt supported. So although the work was tiring and demanded a lot, it was nice to note the progress as time passed and to see how everything was slowly coming together.

More pictures of the cubicle getting built:

img_2396

(We had originally drilled the top bars into each other, and then realized this was not the best way; we had to unscrew the top connecting parts and drill them from above instead, which made more sense, so the thin piece of wood rested on top of the beams.)

One of the challenges, and one of the things I took away from those intense work days: make sure your hands are out of the way when using the drill, and that the surface is stable and steady before drilling. Also, you can build in any position that is comfortable for you, as long as you stay steady. After this, I will not be too afraid to step on ladders, drill sideways, or crouch on my knees, as long as I keep the drill straight.

Because we couldn't rest our materials on anything we could clamp them to, it was hard to find the right position to drill in; being a two-person team, we both had to hold the wood tightly so that one of us could drill the hole.

Another challenge, partly amusing, was that we were not only a team of two, we were also the shortest, and dealing with 2 m tall beams while trying to hold them in place for drilling and hammering nails was very hard and demanding. BUT we made it, and I am extremely grateful to my right hand for staying with me while I hammered at least 20+ nails. 😀

We bought a long piece of white fabric that we cut into 2 m 10 cm lengths, so that we could stretch it over the wood and nail it in place. That took a while, and my arm is a bit sore, but I am glad we put in the effort and did the work meticulously, because we wanted it to be good.

Pictures of the fabric and nailing process:

img_2398

Nailing, nailing, nailing, hammering, hammering, hammering

img_2400 img_2401

The Flying Chair Experience – Part 3

Tuesday 29th November work update:

We had made a sketch drawing of how we wanted our cubicle to look. We had the beams cut out, and now it was time to measure out the actual space and get the dimensions in order to make the supports between the 2m beams. On the sketch drawing (which we carried everywhere + occasionally forgot in the Wood shop), we decided we wanted our cubicle to be a nice trapezoid. We had figured out the lengths; now we had to figure out the angle so that we could make a support for it at a later stage. Who would have thought that math knowledge would be so important now (I haven’t taken a math class in 3 years) D:. BUT we did some trigonometry and got around 110 degrees (the larger angles of the trapezium).
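
The trigonometry here is just the isosceles-trapezoid angle: each slanted leg covers a horizontal "run" of half the difference between the two parallel sides. A quick sketch of that calculation, with hypothetical dimensions (the post doesn't record the exact measurements, so the numbers below are illustrative only):

```python
import math

# Hypothetical dimensions in metres; illustrative only, not the real cubicle's.
back_width = 2.0    # longer parallel side
front_width = 1.27  # shorter parallel side (the open end)
depth = 1.0         # distance between the two parallel sides

# Each leg runs (back - front) / 2 horizontally while covering `depth`.
run = (back_width - front_width) / 2
acute = math.degrees(math.atan2(depth, run))  # angle at the longer side
obtuse = 180 - acute                          # angle at the shorter side

print(round(acute), round(obtuse))  # -> 70 110
```

With these made-up dimensions the larger angles come out to about 110 degrees, matching the number above.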

Drawing:

img_20161210_094724 img_20161210_094729

Enough of math though! What we got done on Tuesday 29th of November was getting the 2m beams to stand on their own. As we were a two-person team, we took turns equally drilling holes in the wood and screwing the screws in place, so documenting the process was not that easy, but we did manage to get some pictures and videos. I never thought that I would actually cut wood, make holes, and use all of the equipment we worked with to create something, to make something of my own with my own hands. It makes me realise how much work it takes to build things, to make a chair or a table or anything of the sort. It takes a lot of precision as well, and accurate measurements to get all the required dimensions perfect. I really enjoyed the process we went through and feel confident handling materials now.

Takeaways:

  • Always have a pencil on you for measurements
  • When drilling a hole, keep the drill stable and straight
  • Keep the drill on your side and do not have your face over it when drilling
  • Always make sure your clamp is in place and tight

The pieces of wood that we would attach together:

img_2377

Clamping wood onto the table surface:

img_2370

Making a line for drilling reference:

img_2374

Repeated process with more beams and drilling:

img_2380 img_2381

Video of me drilling a hole in the wood:

 

Catwalk

Interactive Installation

Jia Rong

2016/12/206

Final Project Documentation

Have you ever imagined yourself rocking the runway like a star? Do you want the spotlights on you as you make each step? Do you love posing in front of the camera? To provide such an experience, our group (Kevin, Saphya, Jia) produced a technologically interactive installation—Catwalk.

Catwalk is a final project for the course Interactive Installation in the Fall 2016 semester at NYU Shanghai. It is a catwalk: when audience members step on it, the lights attached to the whiteboards flanking the walk turn on one by one in time with their steps, and the camera set at the end of the walk takes pictures of them on their last step.

The brief for this project was to create a temporary, portable interactive installation that employs technology, to be located at NYU Shanghai. Taking the space and the available technology into consideration, we designed the catwalk as follows: it is a 3-meter by 1-meter wooden rectangle covered with red felt carpet. The total area is divided into 6 sections using wood and foam. Conductive tape was attached to the wood in each of these sections, as well as to the cardboard beneath the carpet. When the audience steps on the catwalk, the two layers of tape connect, serving as a push button, and trigger the lights through the Arduino.
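
The behaviour described above can be sketched as a tiny simulation. This is not the group's actual Arduino sketch; it is a minimal Python model of the same idea, assuming six sections where sections 0–4 drive lights and the last one fires the camera:

```python
# Minimal simulation of the copper-tape "push button" logic described above.
# Each section of the catwalk closes a contact when stepped on; five sections
# light LEDs and the sixth triggers the camera. All names are illustrative.

NUM_SECTIONS = 6
CAMERA_SECTION = 5  # assumed: the last section fires the camera

def read_outputs(pressed):
    """Map the set of currently pressed sections to light states + camera flag."""
    lights = [i in pressed for i in range(NUM_SECTIONS - 1)]
    camera = CAMERA_SECTION in pressed
    return lights, camera

# A walk down the catwalk, one step at a time:
for step in range(NUM_SECTIONS):
    lights, camera = read_outputs({step})
    print(step, lights, camera)
```

On real hardware each "section" would simply be a digital input read by the Arduino, with the copper-tape contact acting as the switch.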

We were finally assigned a corner of a classroom for the project. After measuring the space and considering the distance between our project and the other groups’ projects, we set the catwalk closely against the wall. Two rolling whiteboards were then used to close off the inside. We hung black felt material on the wall as well as on the whiteboards. After turning off the room lights, we managed to create a relatively dark and private area for the catwalk, where the strings of 3 LED lights were bright enough to create the sense of a spotlight. At the end of the walk, an emitter is attached to the whiteboard. When the audience stands on the last section, it triggers the emitter and a real camera takes a picture of them. We also planned to draw paparazzi on the black wallpaper behind the camera to offer a more immersive experience for the audience.

On the last day of class, several IMA professors were invited to our class to interact with our project. Professor Matt was the first, and he was a bit shocked when he walked to the end and the camera flashed and took the picture. He wowed and jumped back a bit. This was an unexpected reaction: our group was too familiar with the project and didn’t think the dark environment would actually create a sense of fright. When he walked back, he tripped a little, which made us realize the necessity of cleaning up the carpet and marking the edge of the catwalk. However, after this surprising beginning, the other participants had a lot of fun walking the runway and taking pictures in various poses. The dark environment seems to give the audience a greater sense of safety and privacy, allowing them to immerse themselves in the environment and enjoy the runway.

One of the professors also brought up a discussion about the context the runway sets up. The camera and paparazzi, along with the pictures of unprepared audience members, create a moment of high comedy, whereas the red carpet is supposed to create a funky moment. The mixture of these two moments is not what we planned at the beginning, but it was a really pleasant surprise.

The most difficult part of this final project was the technology — coding the whole program in Arduino and Processing. One group member, Kevin, first went to Professor Eric to figure out how to make the sensor and the camera work. Saphya consulted other IMA fellows and came up with the code to make one push button work. Kevin and I then combined the two parts of the code so that it could work for 5 lights and one camera. We then designed the circuit and wire connections on a whiteboard. After this code worked, Saphya added in the code for the sound effects and simplified the whole program.

Because of the time limit, we didn’t manage to paint the paparazzi or make the sound effects work perfectly. Moving forward, we decided to fix these defects and add more sound effects for the IMA final show. Moreover, since we are expecting a much larger audience to come to the show and interact with the project, we will address the safety issues and take some measures to protect our laptop, circuit, and Arduino.

I hope this installation can give the audience a delightful experience and attract more people via word of mouth.

Final Project Documentation || On The Catwalk || Kevin Pham

The installation On The Catwalk, produced by Jia Rong, Kevin Pham, and Saphya Council, recently opened at New York University Shanghai as part of a series of works developed by members of Interactive Installation, taught by Eric Hagan.

On The Catwalk is a project inspired by a multitude of interests from each group member, and the piece developed through collaboration and the agreements made along the way. Saphya wanted a piece that involved viewers and allowed them to interact with the project. Jia wanted some sort of projection mapping, or anything that would invoke in viewers the feeling of being in a different environment. Kevin wanted to involve some sort of motion tracking/sensing, so that the piece would not function when standing alone but would turn on when a viewer interacted with it. After thorough discussion among the members, we boiled it down to the idea that the piece should create an environment of its own that users could trigger through physical involvement. A suggestion was made that the project could be a bridge that lit up when users walked on it; because we were told the piece had to be easily portable, the idea was adapted into a walkway that lit up when people walked on it. The idea of lights flashing wherever an individual stood gave way to the inspiration that the piece could represent a model runway, with the lights representing camera flashes from paparazzi. An environment would be crafted by closing off the piece, separating it from the rest of the projects with whiteboards and walls. Black fabric would be attached to the whiteboards and wall to better simulate the walkway environment. This would all be topped off with a real camera that would take a picture at the end of the walkway.

So the goal of the piece is to simulate the experience of walking down a runway. The entire space is closed off and darkened because, when performing on a stage or a walkway, performers are not meant to be able to see anyone in the audience due to the dimmed environment; what they are more likely to see is lights from camera flashes. On The Catwalk taps into this sensation by recreating that dark environment. As the user walks down the walkway, each step triggers LEDs that flash, along with camera sounds that directly correlate to the user’s position on the board. That is all the user experiences until the end of the walkway is reached. Once the user reaches the end, a real-life camera is sent a signal that triggers it, and a flash photo is captured. If the user remains in that space, the camera continues to take photos.

In order for the installation to work at its best, the location and equipment were instrumental. To create a piece that could be closed off and separated from the rest of the projects, we needed a place with two existing walls, ideally the corner of a room. Whiteboards would be used to create a third wall, closing off the piece and forcing users to enter from one side, as we intended. As for the equipment, the Arduino and breadboard were needed to make the LEDs work. The walkway, constructed in the woodshop, was sectioned into 6 pieces, each with its own connection to the LEDs or the camera. To make the connection between the walkway and the LEDs/camera work, the walkway was first lined with copper tape; then a layer of Styrofoam was placed on top, only along the edges and between the sections. At this point, the board had six sections with Styrofoam borders and gaps in the middle holding copper tape. Cardboard, also lined with copper tape, was then placed on top of the Styrofoam, so that the Styrofoam effectively created a barrier between the two layers of copper tape. When the two layers of copper tape touch — since the walkway tape and the cardboard tape are normally kept separate — the connection is made, triggering the LEDs/camera.

As for the six sections, five were dedicated to the LEDs. Due to limitations of the Arduino, each section could only hold up to three LEDs, for a total of 15. The final section, at the end of the walkway, held the trigger for the real camera flash and capture. The camera is triggered by an infrared emitter that sends out infrared signals, which are picked up by the Canon 70D’s infrared sensor. A delay was put into the code so that if a user stays on the camera-fire section, the camera fires again every two seconds, simulating the model experience.
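
The two-second repeat-fire can be expressed as a pure timing function. This is a sketch of the decision logic only, not the group's Arduino code: in the real installation the Arduino pulsed the IR emitter; here we just compute *when* shots would fire for a given hold duration:

```python
# Sketch of the repeat-fire timing described above: fire immediately on the
# first press, then every `interval` seconds while the section stays pressed.
# Function name and structure are illustrative, not from the original code.

FIRE_INTERVAL = 2.0  # seconds between shots while the section is held

def shot_times(hold_seconds, interval=FIRE_INTERVAL):
    """Times (in seconds) at which the camera fires while the pad is held."""
    times = []
    t = 0.0
    while t <= hold_seconds:
        times.append(t)
        t += interval
    return times

print(shot_times(5))  # standing there for 5 s -> shots at [0.0, 2.0, 4.0]
```

On the Arduino side, the same effect is what a `delay(2000)` inside the "section pressed" branch of the main loop would produce.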


TIMELINE

  • November 28th – in class, ALL
    ◦ Cut wood, styrofoam
    ◦ Drilled wood together
  • November 29th – Kevin + Saphya
    ◦ Drilled top board on
    ◦ Put down copper tape
    ◦ Put on styrofoam
  • December 1st – ALL
    ◦ Copper tape put on cardboard
    ◦ Cut up cardboard
    ◦ Soldered wires to catwalk
    ◦ Tried to work on code
  • December 2nd – Jia + Kevin
    ◦ Moved supplies
    ◦ Cut red fabric
    ◦ Worked with Eric on camera + LED code
  • December 3rd – Jia + Kevin
    ◦ Reworked code
    ◦ Cut up black fabric
    ◦ Put fabric up on wall
    ◦ Tested aluminum foil conductivity
    ◦ Rewired breadboard and Arduino
  • December 4th – ALL
    ◦ Redid wiring with resistors to get pressure sensors and LEDs working
    ◦ Stripped wires
    ◦ Laid LEDs on cardboard
  • December 5th – ALL
    ◦ Added camera firing action
    ◦ Fixed red carpet to catwalk
    ◦ FINISH


PHOTOS/VIDEO

  1. Build Process

img_5036 img_5043 15310961_10205887542146569_1925396640_o




15293450_10205889793522852_382143359_o 15311516_10205889792962838_2075501059_o


15354179_10205900081380042_362433290_o 15322601_10205911743111578_53804626_o 15322506_10205911743231581_405470554_o 15302341_10205911817873447_1063651613_o 15322504_10205911817793445_1181577987_o 15303790_10205911817753444_889931410_o

  2. Pictures of users of the project

img_5257 img_5258 img_5277 img_5285

CODE

Arduino Code:

Processing Code:

Sarabi: I^2 Week 11- Installing Projection Mapping

During class, we were asked to experiment with Mad Mapper. No one in my group (Nicole and Jingyi) had done projection mapping before. To get a feel for it, we decided to create a room full of images. Though we were only projecting onto a piece of styrofoam, the idea was that if the images were projected into a room, the participant would be completely surrounded by them.

Using Mad Mapper was simple enough, but we ran into one critical issue: Mad Mapper doesn’t support importing more than one video at a time. To get multiple videos working, they first need to be cut together in a video editing program. While editing the videos together would be easy enough, they’d also need to be arranged in the frame in such a way that they don’t get distorted in Mad Mapper. Seeing as this was an in-class project, we decided to simplify our approach and use the same image on all sides of the cube. Mapping went smoothly after that. We first tested our cube with an image of our professor, then switched it for a video of Nicole and her friend dancing. The result was quite amusing.

eric nicole

Sarabi: I^2 Week 5- Installing Locations

Our group chose one of the covered courtyards near the bicycle pathway on Century Avenue. The long avenue has several of these small gardens, which are roofed with wooden slats and covered with greenery. People often use the covered spaces to rest on the benches and get away from the sun and rain. This space is occupied in all types of weather: in the summer people use it to take refuge from the heat, and when it’s raining people hide under the leaves and slats in hopes of being kept at least a little bit dry. In general, though, people don’t interact with the space, and it’s easy to walk by without noticing it at all. Our group would like to change that.

[Week 13] The Flying Chair Experience Final Documentation

thumb_img_2370_1024 thumb_img_2373_1024

img_2380 img_2382

img_2395 img_2396

img_2398 img_2401-1

img_2400 img_2402 img_2403 img_2405


Mary Kate and I began coming up with ideas for this project after we found out we were paired together, way before we were given the assignment to write our proposals. We talked at the end of classes, discussing what we wanted to include in our project. Mary Kate really wanted to build a chair using the wood shop and try something involving projection mapping, and I thought it would be fun to experiment as well. I wanted to add some color to the project, and for some reason, the movie Up popped into my head, and I thought it would be cool to project flying videos – and somehow all of this resulted in our final idea. I’m not even sure anymore exactly how it went, but we wrote down everything we thought of early on.

We made a Google Doc on which we shared our ideas, questions, and thoughts. After Mary Kate discussed our project with Eric, we realized we wouldn’t have enough time to build a chair. When we were tasked with creating a timeline for our project, we wrote on the document our priorities for the minimum viable product. Being the only team of two and having a very short deadline, we prioritized finding and compiling videos into a Processing sketch, building our cubicle, and figuring out where to position the projector.

This went fairly smoothly at first. We met with Eric to examine the wood we were being provided with, and we had already made a sketch of what the cubicle should look like and the dimensions of some of its parts. Initially we had planned to cut out 3 beams of wood for the frame, but we determined it would be better to have 4 for the sake of creating our desired shape. We decided to make it 2 meters high, so we reserved a wood shop time slot to cut out the 4 beams we were going to stand up vertically to create the frame of the cubicle. We also cut out pieces for the base. Meanwhile, during Thanksgiving break, we each found 2 videos that we thought would be suitable for the project. We found that it was pretty difficult to find appropriate videos; they had to be in first-person view, over appealing landscapes. We ended up choosing mostly drone footage. Later, when working in class, we edited these videos in iMovie to cut out any unwanted title shots that would detract from the experience, and inserted them into our Processing sketch – which was giving us trouble. Mary Kate had begun writing the code before class, and afterward we worked on it together in class. We edited the code so that the videos played, a different video each time, when the light sensor was triggered instead of when Q was pressed on the keyboard. We had difficulties because we kept getting a NullPointer error, and the light sensor values we were getting kept changing to 0, which caused the video output to constantly stop and restart. We managed to fix this problem later.
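
The trigger logic we struggled with can be sketched independently of Processing. This is a hedged Python model, not our actual sketch: the threshold and sensor values are invented, but it shows the two ideas that mattered — ignore the spurious 0 readings, and only retrigger after the sensor has been uncovered again:

```python
# Sketch of the light-sensor video trigger: when the sensor is covered
# (reading drops below a threshold), start the next video; ignore glitchy
# 0 readings that would otherwise stop and restart playback. All values
# and names here are illustrative assumptions.

THRESHOLD = 300  # assumed: below this, treat the sensor as "covered"

def triggered_videos(readings, num_videos=4):
    """Return the sequence of video indices started by a stream of readings."""
    started, covered, idx = [], False, 0
    for value in readings:
        if value == 0:
            continue  # spurious reading: skip instead of stopping the video
        if value < THRESHOLD and not covered:
            started.append(idx % num_videos)  # play a different video each time
            idx += 1
            covered = True  # don't retrigger until the sensor is uncovered
        elif value >= THRESHOLD:
            covered = False
    return started

print(triggered_videos([800, 0, 100, 0, 120, 900, 90]))  # -> [0, 1]
```

In the real Processing sketch the readings came over serial from the Arduino, but the glitch-filtering and latching behaviour is the same.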

We met with Eric in the wood shop again on Thursday to begin assembling part of our cubicle. We drilled pilot holes in the pieces to screw them together, taking turns using the drill and screwing the parts together. It was a slow process, but we were getting there. Then, I made a trip to the Shi Liu Pu Cloth Market near the Xiaonanmen metro station to find us some white fabric for the cubicle. It was my first time buying fabric, so I was slightly intimidated by the initial prices some vendors told me. After a bit of a search, I ended up buying several meters of relatively thick, somewhat stretchy, nice material, feeling glad I hadn’t bought into ridiculously high prices.

Mary Kate and I met up again on Sunday to continue constructing the rest of the cubicle. We measured the angles for the supporting pieces of the base and had help from Eric, but this proved to be a ridiculous task – we measured the angles and cut out pieces many times, but for some reason, we just weren’t getting pieces that lined up properly. It was pretty baffling and frustrating. Eventually we gave up, and screwed the ends of the not-so-fitting parts together, and then screwed the supports to the base. Constructing the cubicle was a tedious process and took longer than I thought it would. Assembling the top beams of the frame was difficult because both Mary Kate and I are short, so we used a stepladder, though it was still somewhat precarious. Finally, we were ready to add the cloth, which was the last step in making the cubicle. We measured out 1 meter by 2 meter lengths of cloth for the walls and cut them out. We fixed the cloth to the cubicle using a hammer and small nails. We wrapped the cloth around the wooden beams, making sure to keep it stretched out, and nailed them to the back of the beams so as to prevent the nails from being visible to the user.

Later on Sunday night, I looked up how to link our Processing sketch to MadMapper, and found a tutorial on using Syphon. So on Monday morning, Mary Kate and I met up again to set up the projector and link our sketch to MadMapper, but we encountered many difficulties. We were having trouble setting up the projector I had checked out from the IMA lab; its coverage seemed too small, so Mary Kate checked out a bigger projector from ATS. Nevertheless, because of the limitations of the space we were given, we couldn’t figure out how best to place the projector so that it projected onto a majority of the walls instead of a relatively small portion of them. We also had trouble using Syphon to link Processing to MadMapper; it seemed that playing more than one video was too much. Mary Kate set up the light sensor with the Arduino and placed it under a seat mat on the chair we had brought over from another room. We tried to get our sketch to cover the whole screen of the laptop, but weren’t able to. The feedback we got during the presentation was useful: move the projector behind the cubicle and project onto it that way; use a different method to get our sketch to work with MadMapper; tie balloons or add wings to the chair to indicate that the installation is meant to be a flying experience; use a chair that doesn’t have wheels, since ours could move around and disrupt the viewing experience; and move the light sensor so that the user doesn’t have to cover the back of the seat to trigger the videos, or use another sensor, such as a pressure sensor.

More details about the project can be found here: Project Details