Documentation for Midterm Project User Testing

Documentation week 6: Midterm Project User Testing

Documentation by: Jeffery Xing

Instructor: Rudi

Partner: Mike Jung

This week, we introduced our projects to the class in groups. Our project failed to meet the hopes our group had for it. The night before building the model, we set up three vibration sensors that were working fine with a threshold of 1. However, on the day of the midterm user testing, our sensors suddenly became unresponsive and we had to shut down our stand. Luckily, we still have hope of turning our project around by using motion sensors instead of vibration sensors.


Above is the trial we ran on the day we presented the projects. I was able to record most of what our user testers said to us, and we received a lot of constructive criticism. Below are all of the notes I took during the user testing session.

Documentation October 19, 2018

  • The four elements catch the eye
  • So what is the main point of the interaction?
  • How many people can play?
  • Why are there three targets? = more area to hit, and we connected 3 different sensors to be more innovative
  • What are the things from last week's class that we included in the project?
  • Paint some kind of design with Illustrator instead of drawing by hand
  • When doing the laser cutting we can create a better box
  • Perhaps we can create a model with two motion sensors, and instead of the unresponsive vibration sensors we can track the accuracy of the cards.
    • Two different breadboards and a gap in-between two motion sensors
    • We can even make several motion sensors and a large wall with holes in it
  • Make the intuition of the game more clear to the users

In our source code, we used the Arduino example “Knock” and another example code found online for our 7-segment display. Some of the code I commented out with // because we did not need it, but we did not want to delete it right away.

// these constants won't change:
const int ledPin = 13;      // LED connected to digital pin 13
const int knockSensor = A0;
const int knockSensor1 = A1;
const int knockSensor2 = A2;
const int threshold = 2;  // threshold value to decide when there is enough pressure

//Define the Pinout of 7 segment display
int a = 7;
int b = 6;
int c = 5;
int d = 11;
int e = 10;
int f = 8;
int g = 9;
int dp = 4;

// these variables will change:
int sensorReading = 0;      // variable to store the value read from the sensor pin
int sensorReading1 = 0;
int sensorReading2 = 0;
int value;

//Display Number 1
void digital_1(void) {
  unsigned char j;
  digitalWrite(c, LOW); //Set segment C low, which lights this segment
  digitalWrite(b, LOW); //Set segment B low, which lights this segment
  for (j = 7; j <= 11; j++) //Set the remaining segments high, which turns them off
    digitalWrite(j, HIGH);
  digitalWrite(dp, HIGH); //Turn off the DP segment (the little dot at the lower right)
}

//Display Number 2
void digital_2(void) {
  unsigned char j;
  digitalWrite(b, LOW);
  digitalWrite(a, LOW);
  for (j = 9; j <= 11; j++)
    digitalWrite(j, LOW);
  digitalWrite(dp, HIGH);
  digitalWrite(c, HIGH);
  digitalWrite(f, HIGH);
}

//Display Number 3
void digital_3(void) {
  unsigned char j;
  digitalWrite(g, LOW);
  digitalWrite(d, LOW);
  for (j = 5; j <= 7; j++)
    digitalWrite(j, LOW);
  digitalWrite(dp, HIGH);
  digitalWrite(f, HIGH);
  digitalWrite(e, HIGH);
}

//Display Number 4
void digital_4(void) {
  digitalWrite(c, LOW);
  digitalWrite(b, LOW);
  digitalWrite(f, LOW);
  digitalWrite(g, LOW);
  digitalWrite(dp, HIGH);
  digitalWrite(a, HIGH);
  digitalWrite(e, HIGH);
  digitalWrite(d, HIGH);
}

//Display Number 5
void digital_5(void) {
  unsigned char j;
  for (j = 7; j <= 9; j++)
    digitalWrite(j, LOW);
  digitalWrite(c, LOW);
  digitalWrite(d, LOW);
  digitalWrite(dp, HIGH);
  digitalWrite(b, HIGH);
  digitalWrite(e, HIGH);
}

//Display Number 6
void digital_6(void) {
  unsigned char j;
  for (j = 7; j <= 11; j++)
    digitalWrite(j, LOW);
  digitalWrite(c, LOW);
  digitalWrite(dp, HIGH);
  digitalWrite(b, HIGH);
}

//Display Number 7
void digital_7(void) {
  unsigned char j;
  for (j = 5; j <= 7; j++)
    digitalWrite(j, LOW);
  digitalWrite(dp, HIGH);
  for (j = 8; j <= 11; j++)
    digitalWrite(j, HIGH);
}

//Display Number 8
void digital_8(void) {
  unsigned char j;
  for (j = 5; j <= 11; j++)
    digitalWrite(j, LOW);
  digitalWrite(dp, HIGH);
}

void setup() {
  int i; //Set pins 4-11 as outputs
  for (i = 4; i <= 11; i++)
    pinMode(i, OUTPUT);
}

// void setup() {
//  Serial.begin(9600);       // use the serial port
// }

void loop() {
  // read each sensor and store the value in its variable:
  sensorReading = analogRead(knockSensor);
  sensorReading1 = analogRead(knockSensor1);
  sensorReading2 = analogRead(knockSensor2);

  // if any sensor reading is at or above the threshold, count a hit:
  if (sensorReading >= threshold || sensorReading1 >= threshold || sensorReading2 >= threshold) {
    value = value + 1;
  }

  // everything below updates the 7-segment display each time the score changes

  if (value >= 8) {
    value = 0;
  }

  if (value == 1) { // If this is true..
    digital_1(); // this happens...
  }
  if (value == 2) {
    digital_2();
  }
  if (value == 3) {
    digital_3();
  }
  if (value == 4) {
    digital_4();
  }
  if (value == 5) {
    digital_5();
  }
  if (value == 6) {
    digital_6();
  }
  if (value == 7) {
    digital_7();
  }

  //  if (value == 8) { // If this is true..
  //    digital_8();
  //  }

  //  if (sensorReading1 >= threshold) {
  //    // send the string "Knock!" back to the computer, followed by newline
  //    Serial.println("2");
  //  }
  //  if (sensorReading2 >= threshold) {
  //    // send the string "Knock!" back to the computer, followed by newline
  //    Serial.println("3");
  //  }
  //  delay(500);  // delay to avoid overloading the serial port buffer
}
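To sanity-check the counting logic in loop() without the hardware, the threshold-and-wrap behavior can be isolated in plain C++ (a sketch only; the pin and display I/O are left out, and `onReadings` is a hypothetical helper name, not part of our sketch):

```cpp
#include <cassert>

const int threshold = 2; // same threshold as in the sketch
int value = 0;           // running hit count shown on the display

// Mirrors one pass of loop(): any sensor at or above the threshold
// advances the score, and the score wraps back to 0 once it reaches 8.
void onReadings(int s0, int s1, int s2) {
    if (s0 >= threshold || s1 >= threshold || s2 >= threshold) {
        value = value + 1;
    }
    if (value >= 8) {
        value = 0;
    }
}
```

Because the wrap check runs before the display checks, the count only ever shows 1 through 7, which matches the commented-out digital_8() branch.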

Tyler Roman- Project 1- The Tortoise and the Hare Remastered

Hey everyone! This is the documentation for my simple communication system

System Pieces:

The Hare- thinks about the trophy or food

The Tortoise- only thinks about winning the trophy

The board- the location of the story and the playing space

Seven Carrots- the bait to take the hare to the alternate path


Historic Beginnings

I think my initial idea began when Professor Krom first brought up the example of a game as a simple communication system. Though in hindsight an erroneous decision, I had always loved games growing up, so the idea of creating a game for the project immediately caught my interest. As I watched the demo characters work their way across the board, my mind raced for inspiration, and fittingly I recalled one of the stories from my childhood: the Tortoise and the Hare. The more I thought about it, the more determined I became to find a way to retell this time-tested story through a simple game.

The First Leg

The first challenge was finding a way to make the tortoise win while still giving the impression that the hare should win, or at the very least that the hare was much faster than the tortoise. The first solution came in the form of a die. Using a two-sided paper die with one dot on one side and two on the other, I made it so that the tortoise would not move on the first turn and on every subsequent turn could only roll one die for its movement; after the first turn, however, it would move for seven turns in a row while the hare “slept,” before sleeping itself for two turns. The hare, on the other hand, would roll three dice on every turn, but could only move on the first turn before sleeping for seven turns and then finally moving for two turns in a row. This solution allowed the tortoise to win almost every time, barring bad luck, while remaining fairly accurate to the story.

Round the Bend

However, during the initial playtesting, I had to face the fact that my system just was not cutting it. The game was not simple, and it was not very fun either. The dice were traditional, yet the game required detailed verbal explanations just to teach people how to play, and that was a serious no-go. Going back to the drawing board, I took all the advice to heart and redesigned the entire thing to make it more natural and more fun. This resulted in the second iteration, which I demonstrated for the class during the presentations. Here, I learned that we were not supposed to use dice at all, and that floored me pretty hard, simply because I thought it was quite enjoyable to just shake the paper “die,” and it seemed everyone else did too. However, this round was not without its triumphs. For the most part, everyone appreciated the new design and the various little characters I had placed on the “board,” which brought a lot of warmth to my heart. The feedback from this round was to remove the dice entirely, to clarify the move/do-not-move status of the hare, and to add more interactions with the outside characters. And thus came about the third and final iteration.

Home Stretch

In this final iteration, I removed the dice and even the resting hole for the hare, simplifying things in exchange for losing the hilarious sight of the Hare blowing the race. Instead, I opened a separate path that would lure the Hare away from the main track using cleverly placed carrots left by the supporting characters of the story. This let the hare take so long that the tortoise would naturally win. Looking back, I think this assignment was a ton of fun and a perfect example of the power of iteration and the Minimum Viable Product: working, testing, and playing again and again until the end goal is reached. Looking forward, I hope I can continue working on this project and bring it to other forms of media in the future.


29th April Responses

Response to Cerecares Field Trip

What I loved most about Cerecare is how determined they were to see their patients not in terms of their disabilities, but simply as people with skills and capabilities who could and wanted to learn. A great example of this is that they were trying to train some of the older children in massage therapy. They were seeing the children for their strengths, which many assistive technologies rely upon, not just their weaknesses.

In terms of our course work, I think the visit was necessary because it reinforced the idea that low-tech options work in practice and should not be ignored. I thought it was great how they were using a local carpenter to recreate items they found online that were designed to help with or resolve certain challenges. Seeing these solutions also reminded us that some things have already been created to assist with certain conditions. Knowing this makes it more imperative that we as creators keep looking forward, trying to create solutions that are either cheaper and faster to build or quantifiably better. Communicating with the caretakers was also important because it allowed us to hear first-hand what they identify as the most pressing issues needing remedies. Hearing this is vital because too often able-bodied individuals imagine problems that may not exist or may not be the most pressing.

With this said, I also shouldn't underestimate how important it was to come in as an outsider with a fresh set of eyes. This was a lesson I learned during my Nutrition class in Accra, and I'm happy to see it reintroduced in this course. In nutrition, it is sometimes hard for those inside a society to identify their own problems because they may know no different. Likewise, it may be hard for those who care for these children every day to fully recognize what might make life better or easier, simply because they may not realize there is another way. Because of this, I think it is important as outsiders to ask questions of the caretakers and to observe actively, to see whether you can spot something that would be harder for someone on the inside to notice. An example of this came when I asked our tour guide whether it was a problem for the children to keep their heads up. Our tour guide admitted that the current solution was just to hold the children up, and that it would be great if there were a real solution for this.

Overall, I think that this trip has created a great jumping off point for me to start exploring what I might want to ideate as solutions.

Using Assistive Features 

For this task, I decided to use the switch controllers to try to search for a video on YouTube. Though there was a bit of a learning curve (for instance, I first misunderstood what was meant by position), once you learn how to use it, it becomes pretty intuitive. Though tasks take longer to complete, it is still an innovative means of solving a problem.

Everyday Technology Chart


[Ix Lab AW] Final Project Documentation-Jinglan Meng

Project Name: Save Me If You Can!

Documented by: Jinglan Meng

Partner: Yanyu Zhu

Instructor: AW

Class Presentation Date: December 12th, 2016

Documentation Date: December 13th, 2016

Subtitle: A stress release interaction – feed candy to the puppet living on the computer by beating the drum in order to save her from the growing chain.

Project description: “Save Me If You Can” is played with a simple DIY drum and a computer screen. Players interact with the little ball on screen by beating the drum in real life. Once the ball is within the “mouth” area, beat the drum and you catch it: you win this round. There are no competitors, so you can never really lose; you always have another chance to catch the ball and win. In addition, the ball leaves a colorful trace along its path on the background, so if you catch the ball early there will automatically be a painting, and if you catch it quite late the traces look like chains and the puppet appears imprisoned, which looks very scary! Both outcomes improve the user experience and are quite fun. In this way, the goal of stress relief is achieved.

Target: Our target audience is people who suffer from stress in everyday life and need something fun and interactive to relax (especially members of the NYUSH community tortured by finals). We created our project with this idea in mind.

Here is the demo of our project: 



Our Drum! 



About the drum: The mouth on the surface of the drum is exactly the same as the one on the screen, because we would like to make it look more lovely so that users feel more familiar with it. We also have bells surrounding the drum, so every time you beat the drum the sound is very beautiful. We made the colorful supporting base to fit the average height of users, since otherwise the drum would be too low, and the colors were intended to make users feel happy.

Screen Recording

Here is me beating the drum to catch the ball

Introduction & Making Process: For this Interaction Lab final project, both Yanyu and I realized the immense potential of our mid-term project after getting advice from professors and classmates, as well as through our own deep self-reflection, and we were so lucky to be able to team up again to perfect our crazy mid-term project.

Here are the limitations and directions for improvement we identified after finishing our mid-term project, which we planned to address in the final.

  • We planned to use pixels to recognize color in order to restrain the ball within the body area (suggested by Professor Moon).
  • We planned to experiment with different ways of translating the “keyPressed” move into a more physical one, such as pressing a really big button (suggested by Professor AJ), using a weight sensor the user jumps on, or a sensor that tracks the user's hand or body movement, and so on.
  • We also wanted to ask our friends to play-test the game and collect their suggestions in order to create the best user experience possible.
  • We were also thinking about applying the laws of physics to the movement of the balls to make it more realistic.
  • We also wanted to create a story for our puppet to engage the users more; for example, the puppet dies from hunger if it does not eat enough candies (balls) in the required time.

We seriously considered all of those possible future directions while brainstorming for the final project. We also changed some of our ideas mid-way through because we found them not practical enough.

Take the first direction, for example. We originally planned to use “pixels” to restrain the moving area of the ball; we asked several professors for help and did eventually make it work. However, we found that using “pixels” made Processing run extremely slowly, because the pixels operations decelerated the whole program, which is definitely not something we wanted. So although we had spent a lot of time on it, we gave it up and looked for other approaches. It was worth it, though, because we learned a lot about the use of pixels that we would never have learned without this attempt. We then considered a mathematical approach, using a distance function to restrain the ball. We wrote the function (which was super complicated) and checked it several times, only to find that it did not work, and no one could help us debug it. So we thought: we do not really need to restrain the ball at all! Why not just let it bounce everywhere, so the trace forms a chain and creates the visual of locking our puppet up if the ball is not caught for a long time, which happened to be exactly the story we wanted to tell! So in the end we just let the ball go wherever it wants. As for the second direction, we decided it was a good idea to bring the virtual drum into real life, so we built a drum ourselves and put a vibration sensor and an Arduino inside it. For the fourth direction, we decided it would be a mess to bring real physics into our game, and the several friends we asked all preferred the original, so we gave that idea up.
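The distance check we eventually kept for catching the ball (in eattheball(), the ball counts as eaten when dist(x, y, 250, 260) <= 58) is just a Euclidean distance comparison against a radius. A minimal standalone C++ version of that test (the helper names here are ours for illustration; the center (250, 260) and radius 58 are the values from our Processing code):

```cpp
#include <cmath>

// Euclidean distance, like Processing's dist()
double dist(double x1, double y1, double x2, double y2) {
    return std::sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
}

// True when the ball at (x, y) is close enough to the mouth center
// (250, 260) to count as eaten; the 58-pixel radius only approximates
// the mouth, since the mouth arc is not a true circle.
bool ballInMouth(double x, double y) {
    return dist(x, y, 250.0, 260.0) <= 58.0;
}
```

This is why the catch zone is slightly imprecise at the mouth's corners, and why we would still like to try a pixel-based test in the future.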

Conceptual development:


Inspirations: This project is a continuation of our mid-term project, an improved pinball game with various kinds of rolling and bouncing balls and dots. We got some really good suggestions from classmates and professors, and we also thought hard about how to improve the interaction on our own. We adopted Professor AJ's idea of making the key-press button bigger, so we moved the virtual drum into real life! We recycled used paperboard from shoe boxes in a spirit of environmental friendliness, cut and glued the pieces together, pasted colored paper on them, and made a rather pretty drum! The action of clicking the keyboard is thus transformed into beating a drum, making the game more interactive and stress-relieving than the previous version.

In order to create the best user experience possible, we came up with the idea of making a drawing machine, realized through the trace the ball leaves in different colors. We went through a lot of trials while doing this, and finally, with Professor Antonius's help, we adopted the method of adding another canvas on top of the original one. Every time the ball gets eaten, we add another transparent background over the original so that all the previous traces disappear! Furthermore, we set the color of the trace according to the x- and y-coordinates to enrich the aesthetic experience for users.

Also, we applied what we learned in the second half of the semester and added music to our project to further engage the users, and it works!


Reading & Reference: We reviewed the class notes on adding a music piece, and the notes on serial communication for exchanging data between Arduino and Processing. What's more, we even learned some new code from professors; for example, Professor Antonius taught us how to add another canvas on top of the original one using PGraphics. Making this project gave us a wonderful opportunity to review the code we learned in class and reinforce it through active application, as well as the chance to learn new code from professors that we would never have encountered otherwise. We really appreciate it.

Arduino: Connecting the circuit was not very hard. We added a buzzer to check that the sensor worked without involving Processing, and used sensor_value to verify the reading on both the Arduino and Processing sides. These small tips are simple but very efficient.


Lessons Learned:

Test & Trials & Errors & Know-Hows & Tips

  1. We went through a lot of trial and error on the ball trace. We initially thought the ball already left a trace that was simply being covered by the other drawings in the code, so we moved all the static drawings from draw() into setup() in the hope of revealing it. However, because we had so many drawings, this was complicated and tedious work, and once it was done we found that the moving objects (the eyes, the little moving ball) all left traces too, and worse, the traces seemed irremovable; we asked several professors and could not work it out. So we abandoned this idea and searched for another. We asked Professor Antonius for help, and he introduced PGraphics to us. It was exactly what our project needed; we applied it immediately, and it worked! However, we also wanted to clear all the previous traces whenever the ball is eaten, because otherwise there would be too many chains on the screen. We tried several functions and none of them worked. We felt desperate and asked a professor nearby for help; after we explained our project and the problem, he suggested drawing another transparent background each time the ball is eaten. We tried it, and it worked again! From this problem-solving process we learned not to give up until we make it, no matter how difficult things get or how desperate we feel. Also, when we have no idea what to do ourselves, it is a good idea to consult the professors.

2. Time management: we did not make full use of our time during the process. We spent a long time restricting areas using pixels, but pixels slowed down the whole program, so we then tried a mathematical equation inside an “if” statement. Later, with Antonius's kind reminder, we gave this idea up because it cost a whole lot of time and did not help the progress of our project much. I learned that we should have made a priority list to remind us what to do first and what to do next, and that we should not stick to one thing for too long when there are other important things to do.

3. Teamwork: when we first teamed up, we both thought the most efficient approach was to split the work, have each person focus on a different part, and then meet and integrate. That can be a good idea, but it really depends on which work you split. For example, we split the code writing for our mid-term project, and it turned out we had to spend much more time integrating it. So this time, working on our final project, we decided to write the code together and split only the work that could genuinely be divided, such as making the drum. It worked out in the end.

Limitation + Future development direction:

Based on suggestions from professors and other users, we were happy that most people liked our project. However, as Professor Antonius pointed out in class, we need to make the mouth close when the drum is beaten, because otherwise players cannot tell why the ball was not eaten: is something wrong with the program, or with their skills? Our project is a little hard to play, so it is important to let users know that the program is running correctly.

And then here is our code: (the main tag is in the source code!)

Tag2: Draw:
// Send Data from Processing to Arduino
// This code is for the Processing IDE
import cc.arduino.*;
Arduino arduino;
class Ball {
  // declare the Ball variables
  int x, y;
  int xSpeed, ySpeed;
  int c;
  int counter = 0;
  boolean check = false; // to make the ball disappear after it is eaten

  // constructor
  Ball(int parameterX, int parameterY) {
    // pass the initial variables to the ball
    x = parameterX;
    y = parameterY;
    // give initial default values to the other variables
    c = 20;
    xSpeed = 0;
    ySpeed = 20;
    //if (y == 0) {
    //  xSpeed=5;
    //  ySpeed=4;
    //} I once wanted to set the speed of the ball after it shoots out here, but later I found it was not global, so I kept it in the move() function.
  }

  // Draw the ball
  void render() {
    println("render " + counter);
    if (counter > 0) {
      ellipse(x, y, c, c);
    }
  }

  // Move the ball
  void move() {
    if (x > width || x < 0) {
      xSpeed = xSpeed * -1; // to make it go back
    }
    if (y > height || y < 0) {
      ySpeed = ySpeed * -1; // to make it go back
    }
    if (y == 0) {
      xSpeed = (int)random(9, 15); // at first the speed was fixed, which made the movement very inflexible,
      ySpeed = (int)random(6, 15); // so I used a random function and cast it from float to int.
    }
    if (counter > 0) {
      pg.noStroke();
      pg.fill(150, y, x, 140);
      pg.ellipse(x, y, xSpeed, xSpeed);
      image(pg, 0, 1);
    }
  }

  // explode the ball
  void explode() {
    if (mousePressed && dist(mouseX, mouseY, x, y) <= c) {
      bc = color(0, 0, 100);
    }
  }

  void eattheball() {
    //// vibration control
    if (val > 0 && dist(x, y, 250, 260) <= 58) { // since the mouth is not a circle, it is hard to locate precisely;
      // we would like to improve it by using "pixels" in the future.
      fill(248, 227, 178);
      arc(250, 260, 220, 220, 0, PI);
      //check = true; // to make the ball disappear after it is eaten
      //println("eat the ball"); pg.beginDraw();
      pg.fill(150, y, x, 140);
      pg.ellipse(x, y, xSpeed, xSpeed);
      image(pg, 0, 1);
    }
  }

  void resetTheBall() {
    // give initial default values to the other variables
    if (mousePressed) {
      c = 20;
      xSpeed = floor(random(9, 15));
      ySpeed = floor(random(6, 15));
      pg.fill(150, y, x, 140);
      pg.ellipse(x, y, xSpeed, xSpeed);
      image(pg, 0, 1);
    }
  }
}
Tag3: Particle (the splash)
// A simple Particle class
class Particle {
  PVector position;
  PVector velocity;
  PVector acceleration;
  float lifespan;

  Particle(PVector l) {
    acceleration = new PVector(0, -5);
    velocity = new PVector(random(-100, 100), random(-80, 80));
    position = l.copy();
    lifespan = 255.0;
  }

  void run() {
    update();
    display();
  }

  // Method to update position
  void update() {
    velocity.add(acceleration);
    position.add(velocity);
    lifespan -= 1.0;
  }

  // Method to display
  void display() {
    fill(255, lifespan);
    stroke(100, position.y, position.x, lifespan);
    ellipse(position.x, position.y, 8, 8);
  }

  // Is the particle still useful?
  boolean isDead() {
    if (lifespan < 0.0) {
      return true;
    } else {
      return false;
    }
  }
}
Tag4: particle system
// A class to describe a group of Particles
// An ArrayList is used to manage the list of Particles
class ParticleSystem {
  ArrayList<Particle> particles;
  PVector origin;

  ParticleSystem(PVector position) {
    origin = position.copy();
    particles = new ArrayList<Particle>();
  }

  void addParticle() {
    particles.add(new Particle(origin));
  }

  void run() {
    for (int i = particles.size()-1; i >= 0; i--) {
      Particle p = particles.get(i);;
      if (p.isDead()) {
        particles.remove(i);
      }
    }
  }
}

Then the Arduino code:

#include "pitches.h"

int sensorValue;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600); // start serial communication with Processing
}

void loop() {
  // put your main code here, to run repeatedly:
  sensorValue = analogRead(0);
  if (sensorValue > 0) {
    tone(8, NOTE_C4, 1000); // beep to confirm the sensor fired
  }
  Serial.println(sensorValue); // send the reading to Processing
}

import processing.serial.*;
import processing.sound.*;
SoundFile file;

PGraphics pg;

Serial myPort;  // Create object from Serial class
int val;      // Data received from the serial port

//to start a new game
float speed=0.2;//the speed of the rotating eyes
float a;//angle of the rotating hand
int c = 600;//vertical coordinate of the ball bouncing up and down
float speed2=5;//speed of the ball bouncing up and down
int w; //width of the ball bouncing up and down
int h;//height of the ball bouncing up and down

float[] size = new float[100];//create a new float "size"
float[] rot = new float[100];//create a new float "rot"
float[] s1 = new float[100];//create a new float "s1"
float[] s2 = new float[100];//create a new float "s2"
int[] col = new int[100];//create a new int "col"

color[] palette = {
  #EFFFCD, #555152, #DCE9BE, #2E2633, #99173C
};//the five colors for the eyes
float theta;

ParticleSystem ps;
Ball b1;
//Ball b2;
boolean showpipe=true;
color bc= color(255);

void setup() {
  size(500, 700);
  pg = createGraphics(500, 700); // make a new canvas not shown here
  file = new SoundFile(this, "135.mp3");; // play the background music
  //serial communication
  String portName = Serial.list()[Serial.list().length - 1];
  myPort = new Serial(this, portName, 9600);

  ps = new ParticleSystem(new PVector(width/2, 600));

  b1 = new Ball(width, 300);
 // b2 = new Ball(width/2, height/4);

  float Sz = 0;//set Sz to be 0 in the beginning
  strokeCap(SQUARE);//set the line ending to be square shape

  ///////////////////////////////the left eye
  for (int i=0; i<100; i++) {//set integer i, 
    //i equals to 0 in the beginning and keep increasing when i is less than 100
    Sz += random(0.01, 0.3);///size of left rotating eyes
    size[i] = Sz; 
    rot[i]= PI/100*i;//angle of each rotating
    col[i] = (int) random(0, 5);//each time choose color form the palette
    s1[i] = random(0, TWO_PI);
    s2[i] = s1[i] + random(PI/4, PI);
  }
  ///////////////////////////////the right eye
  for (int q=0; q<100; q++) {
    Sz += random(0.6, 0.8);///size of right rotating eyes
    size[q] = Sz ;
    rot[q]= PI/100*q;
    col[q] = (int) random(0, 5);
    s1[q] = random(0, TWO_PI);
    s2[q] = s1[q] + random(PI/4, PI);
  }
}
void draw() {

  while (myPort.available() > 0) {  // If data is available,
    //    val =;         // read it and store it in val
    String myString = myPort.readStringUntil('\n');
    if (myString != null) {
      myString = trim(myString);
      val = int(myString);
      //println("sensor value: ", val);
    }
  }

  //  bouncingball
  background(203, 173, 110);
  translate(150, 210);
  for (int i=0; i<100; i++) {
    stroke(palette[i%5], 100);
    arc(0, 0, size[i], size[i], s1[i], s2[i]);
  }

  translate(200, 0);
  for (int q=0; q<100; q++) {
    stroke(palette[q%5], 100);
    arc(0, 0, size[q], size[q], s1[q], s2[q]);
  }
  theta += 0.383; /// rotate speed
}

  /////////////////////BOUNCING BALL

void bouncingball() {

  // draw the ball
  if (!b1.check) {
    b1.render();
  }
  // move the ball
  b1.move();
  // explode the ball
  b1.explode();
  // b2.explode();
}

void Drawing() {
  fill(248, 227, 178);
  ellipse(250, 150, 420, 300);//layer hair

  fill(84, 49, 80);
  ellipse(250, 180, 400, 305);//upper hair

  fill(248, 227, 178);
  ellipse(320, 440, 260, 200);//arm
  ellipse(180, 440, 260, 200);//arm
  fill(203, 173, 110);
  ellipse(320, 440, 130, 130);//arm
  ellipse(180, 440, 130, 130);//arm

  fill(248, 227, 178);
  ellipse(250, 210, 390, 340);//head
  triangle(250, 780, 130, 300, 380, 300);//body

  fill(134, 133, 113);
  ellipse(150, 210, 40, 40);//eye
  ellipse(350, 210, 40, 40);//eye
  ellipse(150, 210, 30, 30);//eye
  ellipse(350, 210, 30, 30);//eye

  fill(9, 47, 120);
  ellipse(250, 15, 30, 20);//head decoration 

  fill(35, 120, 35);
  ellipse(200, 20, 20, 20);
  fill(4, 120, 127);
  ellipse(300, 20, 20, 20);//head decorations 

  fill(221, 46, 12);
  ellipse(150, 35, 20, 20);
  fill(245, 137, 45);
  ellipse(350, 35, 20, 20);//head decorations 

  fill(111, 168, 138);
  ellipse(100, 64, 20, 20);
  fill(64, 128, 31);
  ellipse(400, 64, 20, 20);//head decorations

  fill(8, 48, 123);
  ellipse(65, 102, 20, 20);
  fill(224, 41, 91);
  ellipse(435, 102, 20, 20);//head decorations

  stroke(84, 49, 80);

  line(310, 540, width/2+35, height/2+210);//the stable hand
  ////////////////below is the trace

  //////////the beating hand
  translate(width/2-55, height/2+190);
  line(0, 0, 60, -25); 

  int c=#B40505;
  int b=#B40505;//color of mouth
  line(140, 260, 360, 260);
  arc(250, 260, 220, 190, 0, PI);//mouth, it will disappear if being touched
  rect(218, 261, 30, 37, 3, 6, 12, 18 );//teeth
  rect(255, 261, 30, 37, 3, 6, 12, 18 );//teeth

  ellipse(250, 560, 100, 20);
}

//////////draw the ball bouncing up and down
void keyPressed() {
  fill(250, 234, 201);
  ellipse(430, c, w, h);//body of ball
  ellipse(417, c-2, 5, 5);
  ellipse(432, c-2, 5, 5);//eyes of ball
  line(430, c-20, 430, c-30);//antenna of ball
  ellipse(430, c-30, 10, 10);//head decoration of ball

  c-=speed2;//ball bounces up 
  if (c>=670 || c<535) {
    speed2=speed2*-1;//when reach the arm ball bounce back down
  }
  if (c>=660) {//when ball is on the ground
    h =34;
  } else {
    w =50;//ball change its shape becomes flatter
  }
}

The Flying Chair Experience – Part 5

The day of:

We had our cubicle ready and we borrowed a chair from a classroom nearby. We had our sensor working with the sketch, and I managed to figure out the “0” bug from before 😀 So the screen no longer constantly flashed back to black unless the sensor was uncovered.
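A likely culprit for a “0” bug like this is stray zero or empty readings from the serial stream being treated as “sensor uncovered.” The exact Processing fix is not reproduced here, but the guard logic can be sketched in Python like this (the threshold value and the idea of holding on to the last good reading are illustrative assumptions):

```python
def make_filter(threshold=5):
    """Return a function that smooths over spurious zero/empty serial readings.

    Readings below `threshold` are treated as noise and replaced with the
    last good value, so a single dropped line no longer blanks the screen.
    """
    state = {"last_good": 0}

    def filtered(raw):
        # raw is None when no complete line has arrived yet
        if raw is not None and raw >= threshold:
            state["last_good"] = raw
        return state["last_good"]

    return filtered

f = make_filter(threshold=5)
readings = [120, 0, 118, None, 0, 3, 115]
print([f(r) for r in readings])  # -> [120, 120, 118, 118, 118, 118, 115]
```

In Processing, the same idea would sit inside the serial-read loop: only overwrite the sensor value when the parsed reading looks valid.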

I brought a seat pad to hide the sensor underneath, tested it a few times, and it worked well. We borrowed several projectors to project onto the white surface, and this process was more difficult than expected. Very difficult.

First, we could not figure out the best place for the projector: when it was too close to the fabric, the image would not spread wide enough. We tried setting the projector on a table behind the chair, placing it underneath the chair, and next to the chair… It was hard to find the perfect spot, but at Eric's suggestion we decided to place it beside the chair, slightly elevated, because that was the best we could manage.

Cubicle and chair and arrangement of projector and computer:

img_2402 img_2405

Sensor and picture of it working as Ferwa is sitting on the chair:

img_20161205_125021 img_20161205_125219 img_20161205_144051

Second, for some reason Processing was not working with MadMapper. We looked up videos and information on the web and followed the instructions. I managed to make MadMapper detect the Processing sketch, and it looked like it was available, but nothing was projected onto the screen in MadMapper. That was strange, and we did not know what to do, so after fiddling around trying to figure it out, we decided to just project normally. That made things harder, because we could not map the edges, and the projector was not projecting exactly the way we wanted, so the experience was not quite there yet.

After presenting the work, I felt that we did a good job with what we could do by Monday. We struggled with things here and there, but we pulled through: we managed to build a cubicle, and we had the projection going, which was more than half of what we had planned for the final. I think we had good teamwork and can definitely improve on this project a lot.

Video of user interaction:

We also received very useful feedback and comments on the project, some of which we hope to implement for the show. One of the main suggestions was to move the projector behind the cubicle so that it projects onto the outside of the fabric but can also be viewed from the inside, which would really help the experience we are going for. It was also suggested that we use pressure sensors, or some sort of conductive-fabric cushion, as the trigger when the person sits down, so that the user does not know there is a sensor and the sensing surface area is larger than a single small light sensor. Other suggestions and feedback included making the experience more immersive by adding balloons, or wings and goggles for the user to put on as they sit on the chair.

I think that there definitely is room for improvement and it will be good to see how this project progresses.

Some final reflection thoughts:

I really enjoyed everyone's projects, and I think that as a class we progressed through the course with struggles here and there (access to the wood shop, the availability of materials and spaces), but we all made something by the end of the course, something bigger than we would have been able to do in any other class.

This class has taught me to look at space differently, to experience space differently, and to ask myself questions as I walk the streets of Shanghai. I definitely feel that my perception of walking down streets has changed, and that the way I look at interaction has changed too. I am also a Resident Assistant, and we have to make bulletin boards. One recent comment I received from my supervisor was that some of my boards involved interactions with the audience beyond the obvious one (making residents write things on the board); they were different for each of my bulletin boards and did not require writing. Reflecting on this, it does not feel like a coincidence: I do look at space differently now and try to find ways in which people can interact with a space differently.


The Flying Chair Experience – Part 4

The class was assigned a new space: Room 933!

We had most of the wood cut out and drilled together, and we were placing the pieces in the shape we wanted before cutting the base supports that would hold all the tall beams together.

Picture of progress and shape of cubicle:

img_2395 img_2398 img_20161204_132910

In the days before the final project was to be shown, we were constantly up in Room 933 and down in the woodshop, cutting materials, bringing them back up, drilling them, and sticking them together.

It was nice to see all the groups working there; it was just a nice building environment because, for me personally, it felt like we were all in this together as a class and I felt supported. So although the work was tiring and demanding, it was nice to note the progress as time passed and to see how everything was slowly coming together.

More pictures of the cubicle getting built up:


(We had originally drilled the top bars into each other and then realised that this was not the best way, so we had to unscrew the top connecting parts and drill them from above instead, which made more sense because the thin piece of wood then rested on top of the beams.)

One of the challenges, and a takeaway from those intense work days: keep your hands out of the way when using the drill, make sure the surface is stable and steady before drilling, and know that you can build in any position that is comfortable for you as long as you are steady. After this, I will not be too afraid to step on ladders, drill sideways, or crouch on my knees, as long as I keep the drill straight.

Because we could not rest our materials on anything we could clamp them to, it was hard to find the right position to drill in. As a two-person team, we both had to hold onto the wood tightly so that one of us could drill the hole.

Another challenge, a partly amusing one, was that we were not only a two-person team, we were also the shortest, and holding 2 m tall beams in place while drilling and hammering nails was very hard and demanding. BUT we made it, and I am extremely grateful to my right hand for staying with me while I hammered at least 20+ nails. 😀

We bought a long piece of white fabric that we cut into 2 m 10 cm lengths so that we could stretch it over the wood and nail it in place. That took a while and my arm is a bit sore, but I am glad we put in the effort and did the work meticulously, because we wanted it to be good.

Pictures of the fabric and nailing process:


Nailing, nailing, nailing, hammering, hammering, hammering

img_2400 img_2401

The Flying Chair Experience – Part 3

Tuesday 29th November work update:

We had made a sketch drawing of how we wanted our cubicle to look. We had the beams cut out, and now it was time to measure the actual space and get the dimensions in order to make the supports between the 2 m beams. On the sketch drawing (which we carried everywhere + occasionally forgot in the wood shop), we decided we wanted our cubicle to be a nice trapezoid. So we had figured out the lengths; now we had to figure out the angle so that we could make a support for it at a later stage. Who would have thought that math knowledge would be so important now (I haven't taken a math class in 3 years) D:. BUT we did some trigonometry and it came out to around 110 degrees (the larger angles of the trapezium).
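To make the trigonometry concrete, here is the calculation sketched in Python. The dimensions below are placeholders (our actual measurements are not listed in this post); the point is the formula: the slanted side of an isosceles trapezium spans a horizontal run of half the difference of the bases, and the larger interior angle is the supplement of the angle that side makes with the long base.

```python
import math

def trapezium_large_angle(long_base, short_base, depth):
    """Larger interior angle (in degrees) of an isosceles trapezium.

    The slanted side covers a horizontal run of (long_base - short_base) / 2
    and a vertical rise of `depth`. The angle at the long base is
    atan(depth / run); the angle at the short base is its supplement.
    """
    run = (long_base - short_base) / 2
    return 180 - math.degrees(math.atan2(depth, run))

# Placeholder dimensions: 3 m front opening, 2 m back wall, 1.4 m deep
print(round(trapezium_large_angle(3.0, 2.0, 1.4), 1))  # -> 109.7
```

With those placeholder numbers the larger angle lands right around the 110 degrees we measured out.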


img_20161210_094724 img_20161210_094729

Enough of math though. What we got done on Tuesday 29th of November was to get the 2 m beams to stand on their own! As a two-person team, we took equal turns drilling holes in the wood and screwing the screws in place, so documenting the process was not that easy, but we did manage to get some pictures and videos. I never thought that I would actually cut wood, drill holes, and use all of the equipment we handled to create something, to make something of my own with my own hands. It makes me realise how much work it takes to build things: to make a chair, or a table, or anything of the sort. It also takes a lot of precision, and accurate measurements to get all the required dimensions perfect. I really enjoyed the process we went through and now feel confident in handling materials.

Takeaways:

  • Always have a pencil on you for measurements
  • When drilling a hole, keep the drill stable and straight
  • Keep the drill on your side and do not have your face over it when drilling
  • Always make sure your clamp is in place and tight

The pieces of wood that we would attach together:


Clamping wood onto the table surface:


Making a line for drilling reference:


Repeated process with more beams and drilling:

img_2380 img_2381

Video of me drilling a hole in the wood:



Interactive Installation

Jia Rong


Final Project Documentation

Have you ever imagined yourself rocking the runway like a star? Do you want the spotlights on you as you take each step? Do you love posing in front of the camera? To provide such an experience, our group (Kevin, Saphya, Jia) produced a technologically interactive installation: Catwalk.

Catwalk is a final project for the course Interactive Installation in the Fall 2016 semester at NYU Shanghai. It is a catwalk where, as the audience steps along it, the lights attached to the whiteboards flanking the wall turn on one by one with their steps, and the camera set at the end of the walk takes pictures of the audience at their last step.

The brief for this project was to create a temporary, portable interactive installation that employs technology, to be located at NYU Shanghai. Taking the space and the available technology into consideration, we designed the catwalk as follows. It is a 3-meter by 1-meter wooden rectangle covered with red felt carpet. The total area is divided into 6 sections using wood and foam. Conductive tape is attached to the wood in each of these sections, as well as to cardboard beneath the carpet. When the audience steps on the catwalk, the two layers of tape connect, serving as a push button and triggering the lights through the Arduino.
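Electrically, each copper-tape section behaves like a momentary push button, so the Arduino side reduces to one digital read per section. Our actual Arduino sketch is not included in this post; the following Python sketch of the per-section logic (the 5-lights-plus-1-camera split comes from our build notes, while the function name is mine) shows the mapping:

```python
# Each catwalk section is a copper-tape contact read like a push button:
# a closed contact (True) turns on that section's lights; the last section
# fires the camera instead.
SECTION_COUNT = 6  # sections 0-4 drive lights, section 5 triggers the camera

def update_outputs(contacts):
    """Map the six tape-contact states to LED states plus a camera trigger.

    contacts: list of 6 booleans, True where the two tape layers touch.
    Returns (led_states, fire_camera).
    """
    assert len(contacts) == SECTION_COUNT
    led_states = list(contacts[:5])  # first five sections light up
    fire_camera = contacts[5]        # final section fires the camera
    return led_states, fire_camera

# A walker standing on section 2 lights that section only:
print(update_outputs([False, False, True, False, False, False]))
```

On the Arduino this would be a `digitalRead()` per section feeding a `digitalWrite()` per LED string.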

We were finally assigned a corner of a classroom for the project. After measuring the space and considering the distance between our project and the other groups' projects, we set the catwalk right up against the wall. Two rolling whiteboards were then used to close off the inside. We hung black felt on the wall as well as on the whiteboards. After turning off the room lights, we managed to create a relatively dark, private area for the catwalk, where the strings of 3 LED lights are bright enough to create the sense of a spotlight. At the end of the walk, an infrared emitter is attached to the whiteboard. When the audience stands on the last section, it triggers the emitter and a real camera takes a picture of them. We also plan to draw a paparazzi figure on the black wall covering behind the camera to offer a more immersive experience for the audience.

On the last day of class, several IMA professors were invited to our class to interact with our project. Professor Matt was the first, and he was a bit shocked when he walked to the end and the camera flashed and took his picture. He wowed and jumped a bit backwards. This was an unexpected reaction: our group was too familiar with the project and didn't realize the dark environment actually created a sense of horror. When he walked back he tripped slightly, which made us realize the need to clean up the carpet and clarify the edge of the catwalk. After this surprising beginning, however, the other participants had a lot of fun walking the runway and taking pictures in various poses. The dark environment seems to give the audience a sense of safety and privacy that allows them to immerse themselves in the environment and enjoy the runway.

One of the professors also brought up the context the runway sets up. The camera and paparazzi, along with the pictures of unprepared audience members, create a moment of high comedy, whereas the red carpet is supposed to create a funky moment. The mixture of these two moods is not what we planned at the beginning, but it is a really pleasant surprise.

The most difficult part of this final project was the technology: coding the whole program in Arduino and Processing. One group member, Kevin, first went to Professor Eric to figure out how to make the sensor and the camera work. Saphya consulted other IMA fellows and came up with the code for one working push button. Kevin and I then combined the two parts of the code so that it worked for 5 lights and one camera. We then designed the circuit and wire connections on a whiteboard. After this code worked, Saphya added the code for the sound effect and simplified the whole program.

Because of the time limit, we didn't manage to paint the paparazzi or make the sound effect work perfectly. Moving forward, we have decided to fix these defects and add more sound effects for the IMA final show. Moreover, since we are expecting a much larger audience to come to the show and interact with the project, we will address the safety issues and take some measures to protect our laptop, circuit, and Arduino.

I hope this installation gives the audience a delightful experience and attracts more people via word of mouth.

Final Project Documentation || On The Catwalk || Kevin Pham

The installation On The Catwalk, produced by Jia Rong, Kevin Pham, and Saphya Council, recently opened at New York University Shanghai as part of a series of works developed by members of Interactive Installation, taught by Eric Hagan.

On The Catwalk is a project inspired by a multitude of interests held by each group member, and the piece developed through collaboration and the agreements we made along the way. Saphya wanted a piece that involved viewers and allowed them to interact with the project. Jia wanted some sort of projection mapping, or anything that would invoke in viewers the feeling of being in a different environment. Kevin wanted to involve some sort of motion tracking/sensing, so that the piece would not function when standing alone but would turn on when a viewer interacted with it. After thorough discussion among the members, we boiled it down to the idea that the piece should create an environment of its own that users trigger through physical involvement. A suggestion was made that the project could be a bridge that lit up when users walked on it, and the idea was adapted because the piece had to be easily portable. So the idea changed to a walkway that lit up when people walked on it. The notion of lights flashing wherever an individual stood gave way to the inspiration that the piece could represent a model runway, with the lights standing in for paparazzi camera flashes. An environment would be crafted by closing off the piece, segregating it from the rest of the projects with whiteboards and walls. Black fabric would be attached to the whiteboard and wall to better simulate the walkway environment. This would all be topped off with a real camera that takes a picture at the end of the walkway.

So the goal of the piece is to simulate the feeling of walking down a runway. The entire space is closed off and darkened because, when performing on a stage or a walkway, performers are not meant to be able to see the audience through the dimmed lighting; what they are more likely to see is camera flashes. On The Catwalk taps into this sensation by creating that dark environment. As the user walks down the walkway, each step triggers LEDs that flash, along with camera sounds that correlate directly with the user's position on the board. That is all the user experiences until the end of the walkway is reached. Once the end is reached, a real camera is sent a signal that triggers it, and a flash photo is captured. If the user stays in that spot, the camera keeps taking photos.

For the installation to work at its best, the location and equipment were instrumental. To create a piece that could be closed off and kept separate from the rest of the projects, the location mattered: it needed to be a place with two existing walls, ideally the corner of a room. Whiteboards would create a third wall, closing off the piece and forcing users to enter from one side, as intended. As for the equipment, an Arduino and breadboard were needed to drive the LEDs. The walkway, constructed in the woodshop, was sectioned into 6 pieces, each with its own connection to the LEDs or the camera. To create that connection on the walkway itself, the walkway was first lined with copper tape; then a layer of styrofoam was placed on top, only along the edges and between the sections. At this point the board had six sections with styrofoam borders and gaps in the middle holding copper tape. Cardboard, also lined with copper tape, was then placed on top of the styrofoam, so that the styrofoam acted as a spacer keeping the two layers of tape apart. When a step presses the cardboard down and the two pieces of copper tape touch, the connection is made and the LEDs or camera trigger.

As for the six sections, five were dedicated to the LEDs. Due to the limitations of the Arduino, each section could only hold up to three LEDs, giving a total of 15. The final section, at the end of the walkway, held the trigger for the real camera flash and capture. The camera is triggered by an infrared emitter that shoots out infrared signals, which are picked up by the Canon 70D's infrared sensor. A delay was put into the code so that if a user stays on the camera-fire section, the camera fires again every two seconds, simulating the model experience.
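The two-second refire behaviour amounts to a timer: record when the camera last fired, and fire again only while the contact is still closed and the interval has elapsed. Our Arduino code used a simple delay; the Python sketch below shows a non-blocking variant of the same behaviour (the reset-on-step-off detail is my assumption):

```python
FIRE_INTERVAL_MS = 2000  # refire every two seconds while the contact is held

def make_camera_timer(interval_ms=FIRE_INTERVAL_MS):
    """Return step(now_ms, contact_closed) -> True when the camera should fire."""
    state = {"last_fire": None}

    def step(now_ms, contact_closed):
        if not contact_closed:
            state["last_fire"] = None  # reset when the user steps off
            return False
        if state["last_fire"] is None or now_ms - state["last_fire"] >= interval_ms:
            state["last_fire"] = now_ms
            return True  # send the IR pulse to the camera
        return False

    return step

step = make_camera_timer()
# User stands on the final section from t = 0 to 4500 ms, sampled every 500 ms;
# the camera fires at t = 0, 2000, and 4000 ms:
print([step(t, True) for t in range(0, 5000, 500)])
```

On the Arduino, `millis()` would supply `now_ms`, so the lights keep responding between camera shots.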


  • November 28th – in class ALL
    ◦ Cut wood, styrofoam
    ◦ Drilled wood together
  • November 29th – Kevin + Saphya
    ◦ Drilled top board on
    ◦ Put down copper tape
    ◦ Put on styrofoam
  • December 1st – ALL
    ◦ Copper tape put on cardboard
    ◦ Cut up cardboard
    ◦ Solder wires to catwalk
    ◦ Tried to work on code
  • December 2nd – Jia + Kevin
    ◦ Moved supplies
    ◦ Cut red fabric
    ◦ Worked with Eric for camera + LED code
  • December 3rd – Jia + Kevin
    ◦ Rework code
    ◦ Cut up black fabric
    ◦ Put fabric up on wall
    ◦ Test aluminum foil conductivity
    ◦ Rewired breadboard and Arduino
  • December 4th – ALL
    ◦ Redo wiring with resistors to get pressure sensors and LEDs working
    ◦ Strip wires
    ◦ Lay LEDs on cardboard
  • December 5th – ALL
    ◦ Add in camera firing action
    ◦ Fix red carpet to catwalk



  1. Build Process


15293450_10205889793522852_382143359_o 15311516_10205889792962838_2075501059_o


  2. Pictures of users of the project

img_5257 img_5258 img_5277 img_5285

Arduino Code:

Processing Code:

Sarabi: I^2 Week 11- Installing Projection Mapping

During class, we were asked to experiment with MadMapper. No one in my group (Nicole and Jingyi) had done projection mapping before. To get a feel for it, we decided to create a room full of images. Though we were only projecting onto a piece of styrofoam, the idea was that if the images were projected into a room, the participant would be completely surrounded by them.

Using MadMapper was simple enough, but we ran into one critical issue: MadMapper doesn't support importing more than one video at a time. To get more than one video working, they first need to be cut together in a video editing program. While editing the videos together would be easy enough, they would also need to be arranged in the frame in such a way that they don't get distorted in MadMapper. Seeing as this was an in-class project, we decided to simplify our approach and use the same image on all sides of the cube. Mapping went smoothly after that. We first tested our cube with an image of our professor, then switched it for a video of Nicole and her friend dancing. The result was quite amusing.

eric nicole