Documentation for Final Project “Kelp me!”

Documentation for Final Project 

Documented by: Kaley Arnof

Name of the project: Kelp me!

This project pushed us to create a game of interaction, the final test of our knowledge and newly acquired skills. We decided to test ourselves by creating a game that asks the users to interact both with each other and with their controllers to reach a mutual goal. The interactions between players should be collaborative in nature, not competitive; the controllers should be used in a distinct, unique, and intuitive way, all the while catalyzing the interaction between the user and the computer. We wanted our game to be accessible to gamers across a wide range of ages, and to allow people without prior gaming experience or knowledge of our issue to play with relative ease.

The game itself is an ice-themed maze in which players jump onto platforms and over obstacles; when done correctly, their characters reach safety. Only when both players reach the goal does either feel the joy of victory. Our premise for the game, penguins in the Antarctic Circle, was inspired by our desire to gamify a crucial societal problem, pollution, and to spread awareness and promote discussion of these issues.

Goal: create a game that users instinctively understand how to play.

 

The Research behind Kelp Me!

Before embarking on our journey, we first needed to figure out our destination. Before I jump into our specific project, I want to take a moment to talk more generally about research-based work. I greatly appreciate this course’s emphasis on the research process. Over my few brief years, I’ve come to believe that the meta-cognitive period is the most important step in any large task. Finding the right inspiration can form quite beautiful waves of creation, which in turn can inspire others. Communal building dates back to ancient philosophers and engineers, and I want to take the time to recognize IMA’s adoption of this tradition.

Research and Development for the Controller

http://graphics.cs.cmu.edu/projects/Avatar/avatar.pdf

When bouncing around a number of different ideas, Anna brought up a paper she read about interactive control of ‘avatars’ within a game. Through various sensors placed all around the player, the user could navigate an entire virtual world in real time. We both loved this concept, but knew that we lacked the knowledge to make a VR game. This brought us to a scaled down version of this idea—an isolated sensor which controlled a set of movements. Despite the restriction, we wanted the player to have as much mobility as possible. This led us to our designated area of the body: the feet.

 

This triggered memories of games we've played in basements and arcades: the infamous "Dance Dance Revolution" pad and the Wii Balance Board.

 

 

Neither of these games captured our idea in the free form we wanted. Our idea involved the player being able to move their legs quickly, but without any "cheating" by the player (as seen in many Wii games). Our first idea was pretty similar to those boards: we brainstormed an interactive board on which players signal their penguin to run to the right by leaning to the right, and vice versa; jumping would also make the penguin jump. This style of console works well in a game such as Dance Dance Revolution, but it doesn't translate as well into our game. For our game, we imagined it could be too difficult with not enough payoff: the player could easily slip off the board, lose the game, and feel unnecessarily frustrated. This meant it was time for us to diverge from our inspirations and imagine something new.

We don’t have one foot, we have two… what would it be like to have two separate controllers, instead of one? Walking would make the penguin walk, jumping would jump; the virtual world would mirror the real. This is much closer to what we wanted to make, and we were both quite excited at our realization.

Initially, we thought of using pressure sensors on our controller. But just as in the midterm project, we simply needed digital input; analog input was of no use. Additionally, body weight is difficult to control and standardize when a player's full weight is placed on a sensor. Therefore, some sort of button made much more sense and solved this problem entirely.

Once we settled on using digital buttons rather than analog pressure sensors, we needed to actually create the controllers. Surprisingly, the construction didn't need as much troubleshooting as we predicted. Using the same technique as the button from the midterm, we cut oval-shaped pieces of cardboard that fit the dimensions of any foot. We used two such pieces for each "single" controller and attached conductive tape to one side of each, soldering a long strand of wire to each taped side: one to be connected to a digital pin and the other to ground. We cut the same shape out of a styrofoam/fabric-like material to keep the two pieces from constantly touching one another, and in its center we cut two holes that allow the cardboard pieces to touch when pressed together. When the two pieces of conductive tape touch, they close the circuit that the Arduino is reading, operating as a simple button. And with that, we had made our penguin feet controllers!
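For reference, a minimal Arduino sketch for reading two homemade switches like these might look like the following. It assumes the foot wires go to digital pins 2 and 3 with the internal pull-up resistors enabled (the actual pins and wiring in our build may differ) and simply reports both states over serial:

const int LEFT_FOOT_PIN = 2;   // assumed pin for the left foot switch
const int RIGHT_FOOT_PIN = 3;  // assumed pin for the right foot switch

void setup() {
  Serial.begin(9600);
  // With INPUT_PULLUP, a pin reads HIGH while the cardboard layers are apart
  // and LOW when the conductive tape closes the circuit to ground.
  pinMode(LEFT_FOOT_PIN, INPUT_PULLUP);
  pinMode(RIGHT_FOOT_PIN, INPUT_PULLUP);
}

void loop() {
  int leftPressed = (digitalRead(LEFT_FOOT_PIN) == LOW) ? 1 : 0;
  int rightPressed = (digitalRead(RIGHT_FOOT_PIN) == LOW) ? 1 : 0;

  // Send the two states as "left,right" so Processing can split on the comma.
  Serial.print(leftPressed);
  Serial.print(",");
  Serial.println(rightPressed);
  delay(10);
}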

This was the design we brought into user testing. This is what the controllers and the interaction looked like at the time:


The user testing, as always, was incredibly useful in the editing/improving/streamlining process. People seemed generally excited to test our project, and were often surprised to discover how the controls work. Some people found the controllers a bit difficult to use at first, which can be attributed not to the players but to us, for preparing a far too difficult level 1. One of the main questions we asked was whether players preferred the controllers lying on the ground or attached to their feet. Of the thirteen people we asked, eleven thought attaching the consoles to their feet would improve the overall experience. We also asked whether our user testers preferred jumping or sliding movements. The response was overwhelmingly pro-sliding.

Taking the feedback to heart, we added straps and changed the controls from jumping to sliding, which also meant that the player would have to jump in real life in order to make the penguin jump. For the straps, we used velcro, which allowed players to adjust the strap to their feet and ensured it stayed on while playing. This method also meant that the controllers could come on and off easily and could be worn without the players having to take off their shoes. In addition to the changes from our feedback, we also made a few changes of our own: namely, we covered the controller with black fabric, for both aesthetic and practical reasons. Our vision also included adding toes to the controllers so that they more clearly resembled penguin feet, but due to time constraints we kept them as they were.

 

Reflecting on the post-user-testing version of the console, I am both proud and critical. I'm proud of the functionality; these feet accomplish what we required of them. At the same time, I wish we had had more time to create a long-lasting, wearable, and aesthetically pleasing version of our consoles. One thing that became clear after user testing was that cardboard, as a material, wears out quite quickly. If I were to redesign the feet, I would recommend using cork, foam, or plastic instead. Additionally, the feet should not have cords attached to them: not only do the cords get extremely tangled, they also ruin the illusion of reality by drastically limiting mobility. In theory, the console could grow into a line of different animal feet that correspond to different releases of the game. As critical as I am, I must remind myself that my nit-picky nature stems from my passion for the project idea and my desire to make it as amazing as possible.

Research and Development for the Interface

Without a doubt, the game we envisioned came from games we've enjoyed in the past. Our initial idea was something that vaguely involved penguins getting through real-life obstacles to reach safety. This led us to a game in which a penguin jumps and slides across melting pieces of ice, trying to get to shore. But this idea brought up a problem: how would the two players work together? One solution was to give the two players different jobs that together would make one working penguin. After debating between two avatars controlled by two players versus one avatar controlled by two players, the single-avatar option was pushed off the table, mainly because having two players operate one avatar could lead to enormous frustration for one or both players. At this point, we turned to our research. We liked the look and feel of a moving screen, as in Temple Run,


but didn't know how to incorporate two players into that format. Could we split the screen? No; that loses the interaction between players that we want. We could stop the screen from moving unless both players are on it, or let the moving screen become the mechanism that gets players out (i.e., if one of the players falls too far behind, both lose). But both of these ideas felt unnecessarily complicated compared with another format: the static screen.

Fireboy and Watergirl was an inspiration from the beginning, since both Anna and I have fond memories of playing this game and wanted ours to evoke the same feeling; using its static screen felt right and fit much more smoothly with our game. The next step turned away from our old friend Arduino to our new, slightly scarier friend Processing. Luckily, some open-source code for a similar game was available.

 

Reading through (and later manipulating) the code brought me to a whole new level of coding. Although I knew about each of the techniques used (arrays, void functions, blocks, etc.), seeing someone else use these tools in a new way taught me more than I ever could have deduced on my own. One of the aspects of this project I am most proud of is truly understanding, line by line, what my code is accomplishing. Although that might not sound impressive to someone else, to me it felt like a major breakthrough.

Paralleling the console process, after understanding what came before, it was time to forge our own path. The first step was the conception of a (far too difficult) layout for the game. The level we made was specially designed to require both players in order to complete the challenges. Converting this layout to Processing was surprisingly simple, yet time consuming.
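To give a sense of what that conversion involves, here is a minimal Processing sketch of the idea: the level is written out as a grid of numbers (0 = empty, 1 = ice block) and drawn as rectangles. The grid, tile codes, and colors below are placeholders for illustration, not our actual level data:

// Hypothetical level layout: each number is one tile (0 = empty, 1 = ice block).
int[][] level = {
  {1, 1, 1, 1, 1, 1, 1, 1},
  {1, 0, 0, 0, 0, 0, 0, 1},
  {1, 0, 1, 0, 0, 1, 0, 1},
  {1, 1, 1, 1, 1, 1, 1, 1}
};
int tileSize = 50;

void setup() {
  size(400, 200);   // 8 columns x 4 rows of 50-pixel tiles
}

void draw() {
  background(20, 40, 80);            // dark "water" background
  for (int row = 0; row < level.length; row++) {
    for (int col = 0; col < level[row].length; col++) {
      if (level[row][col] == 1) {    // draw a pale ice block for every 1
        fill(240);
        rect(col * tileSize, row * tileSize, tileSize, tileSize);
      }
    }
  }
}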

Before user testing, there were two major additions we needed to make that the initial code did not have. The first came from the difference between a single-player and a two-player experience: for our game, we needed three different kinds of dead zones. Although I spent far too much time trying to figure out how to accomplish this, the solution was simply to indicate in the array which number (0, 1, 2, 3) applied to which player, and then coordinate this with the building blocks. The second task, making movable boxes, was not completed before user testing. Although we didn't have time to finish this part, one way of doing it is to create another "player" with its own function, then use a loop that moves this "player" one unit over when pushed by the other player. We could also have made the sides of the box dead zones to clarify which player needed to move which box.
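As an illustration of the dead-zone idea, one tile code can be reserved for zones that are fatal to everyone, one for zones fatal only to player 1, and one for zones fatal only to player 2. The codes and helper below are hypothetical and only sketch the logic; they are not our exact implementation:

// Hypothetical tile codes:
// 0 = empty, 1 = solid block, 2 = dead zone for both players,
// 3 = dead zone for player 1 only, 4 = dead zone for player 2 only.
int[][] level = {
  {1, 1, 1, 1},
  {1, 3, 4, 1},   // one player-1-only and one player-2-only dead zone
  {1, 1, 1, 1}
};

boolean isDeadly(int tileCode, int playerId) {
  if (tileCode == 2) return true;                   // deadly to everyone
  if (tileCode == 3 && playerId == 1) return true;  // deadly to player 1 only
  if (tileCode == 4 && playerId == 2) return true;  // deadly to player 2 only
  return false;
}

void setup() {
  // Quick check of the logic: only player 1 should die on the code-3 tile.
  println(isDeadly(level[1][1], 1));  // prints true
  println(isDeadly(level[1][1], 2));  // prints false
}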

In addition to the feedback about the console, we also received very useful information about the interface during user testing. The responses, though different in wording, all echoed the same message: the interface was ugly and unclear. One user said outright, "Make it more beautiful." For our project, this translated into making the interface relevant to our game concept: we needed snow, ice, oil, and penguins! As for clarity, this request called for start, win, and lose screens. Again, due to time constraints, we didn't add the start screen, sticking only to the win and lose screens.
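The win and lose screens themselves can be handled with a simple game-state variable that draw() checks every frame. The state names, colors, and key shortcuts below are an assumed minimal version of that pattern rather than our exact screens:

// Hypothetical game states.
final int PLAYING = 0;
final int WON = 1;
final int LOST = 2;
int gameState = PLAYING;

void setup() {
  size(800, 600);
  textAlign(CENTER, CENTER);
  textSize(48);
}

void draw() {
  if (gameState == PLAYING) {
    background(180, 220, 255);   // icy blue play field
    // ... draw the level and both penguins here ...
  } else if (gameState == WON) {
    background(255);
    fill(0, 150, 0);
    text("You both made it to safety!", width / 2, height / 2);
  } else if (gameState == LOST) {
    background(0);
    fill(200, 0, 0);
    text("Oh no! Try again together.", width / 2, height / 2);
  }
}

void keyPressed() {
  // Stand-in for real win/lose conditions so the screens can be previewed.
  if (key == 'w') gameState = WON;
  if (key == 'l') gameState = LOST;
}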

Reflecting on the project, and this semester, the idea of interaction has taken on a whole new level of significance. I initially defined interaction as a conversation between two or more subjects in which one or more subjects send and receive a signal (input, starting the conversation, etc.). While I stand by my initial definition of interaction, I would revise it to emphasize that the exchange is, in essence, two-sided. Additionally, I would like to find a way of adding the word "meaningful" into the phrasing; interaction has purpose.

(I’m having a lot of trouble uploading a video of the project due to the wifi strength, my apologies. I will share the link via google drive.)

RAPS – BirthDeath

BirthDeath

Partners: Maxwell, Vivian

Project Abstract

A live audiovisual performance in collaboration with Maxwell and Vivian. This performance explores the limits of the human body.

Project Description

BirthDeath is about the limits of the human body, and pushing those limits even further, until the body collapses. Maxwell and I created a realtime audiovisual performance following this concept.

With this performance, our purpose is simply to gradually provoke an overwhelming feeling in the audience and even increase their heart rate as the overall pace and feel of our piece speed up. This performance is not necessarily meant to make you reflect on anything; it is just a sensory experience.

Our inspiration for this piece came from our interest in working with dance performance and the body. Maxwell and I have both danced, and we really wanted to explore incorporating the body into our audiovisual performance. We originally wanted to use a heartbeat monitor and an accelerometer to modify some values in our Max patch, but since this was too complicated for the little time we had, we decided not to use them. However, we kept the dance performance.

Perspective & Context

Our performance fits into the historical context of visual music and abstract film in the sense that we really wanted to create a correlation between the sound and the visuals, even though not all of the audio parameters were modifying the visuals. Our communication during the performance was essential.

I think that nowadays, because of our constant need to maximize time and productivity, we push our bodies with the last bits of energy we have everyday. We forget that our bodies have a limit and act as though we are invincible.

Development & Technical Implementation

From the beginning, Maxwell and I had a clear idea that they were going to work on the audio and I would work on the visuals. Maxwell created the audio patch alone, but in my case, I found that working on my patch while Maxwell played around with the audio served as a guide for me to create the visuals.

Part of the inspiration for this piece came from Maxwell's and my interest in using sensors in the performance. We wanted to incorporate Arduino into our piece so that we could use sensors such as a heartbeat monitor and an accelerometer. We did research on different types of heartbeat monitors, but the only ones available to us were not reliable at all and were quite complicated to use. We considered buying a nicer sensor, but we still did not know how to use it and did not have enough time to make it work, so we decided to use only the accelerometer. With Eric's help, we got a patch that could send Arduino data to Max, so getting the accelerometer values was not too hard. However, attaching the accelerometer to Maxwell with a Bluetooth Arduino did not seem very reliable either, so we officially decided to abandon our idea of using sensors.

Instead, since I still wanted to show a correlation between the heartbeat and the visuals, I made the amplitude of Maxwell’s audio determine the size of a 3D model of a heart, and the redness of the screen in the beginning of the piece. This is essentially what I wanted to use the sensors for anyways, but this was definitely a much better and faster way of going about it.

As for the rest of the visual components, I originally meant to generate all of the visuals in Max. However, since I do not really know how to create graphics in Max, I decided to use screen recordings of sketches I had previously created in Processing. In the patch, I switched between four videos: one of a red background, one of the 3D model of the heart, and two of my Processing sketches. I used functions such as rotation, zoom, scramble, multiplier, etc. When it came to modifying the visuals live, the MidiMix was fundamental to the success of the piece; I cannot imagine achieving the same results without it. It really made all the values easier to access and alter.

Overall, we had two different patches, one for the audio, and one for the visuals. This means that we used two different laptops in the performance, and they interacted through our own improvisation and through the amplitude of the sound.

Performance

The performance was the first time we all ran through the piece. It went way better than we expected. I was terrified because I was not sure it was going to go well, and I did not know when the performance would end, so much of it was improvising on the spot and trying to make the visuals fit the sound and the dance.

In terms of what could have gone better, Maxwell kept walking on and off the stage to work on the audio, which we realized was not a great idea because it did not add anything to the piece and was in fact a bit distracting. So we decided to cut the dance from the performance and focus only on the audio and visuals.

Even though Maxwell and I made the greatest contributions to the project, Vivian was very helpful during the performance: if we wanted to keep the dance, someone had to control the sound, and Vivian took on that role.

Maxwell and I had the opportunity to perform in Miki's show at Extra Time Cafe & Lounge. Here is a picture of us during the performance.

Conclusion

Overall, I am very happy with how this project turned out. At first it seemed a bit chaotic because we did not really know what direction to go in, but we ended up figuring it out. Working with Maxwell was great; they did an amazing job with the sound, which really helped me develop my part of the project. And Vivian was very helpful during the performance because she was able to control the audio while Maxwell was dancing; I would not have been able to control the sound and the visuals at the same time. I really enjoyed doing this project and hope to create more live audiovisual performances in the future.

Being able to perform at Extra Time Cafe & Lounge was an amazing experience.

Recitation 9: Serial Communication (Leon)

Recitation 9: Serial Communication

Date: 12/16/2018

Instructor: Leon & Yang

In today's recitation, we were asked to use Arduino to send two values to Processing via serial communication, and to build a sketch that works like an Etch A Sketch.

For this exercise, I first built a circuit with two potentiometers and wrote a very simple Arduino sketch to read the two analog values from the potentiometers. My code in Arduino looks like this:

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Read both potentiometers and send them as "value1,value2" followed by a newline.
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  Serial.print(sensor1);
  Serial.print(",");
  Serial.print(sensor2);
  Serial.println();
}

Then I built a sketch in Processing. First I imported the values from Arduino, and then I drew ellipses whose positions change according to the values from Arduino. My code looks like this:

 

import processing.serial.*;

String myString = null;
Serial myPort;

int Val = 2;             // number of values coming from Arduino
int[] sensorValues;
int[] oldValues;         // declared but not used in this version

void setup() {
  size(1000, 1000);
  background(0);
  noStroke();
  frameRate(30);
  setupSerial();
}

void draw() {
  updateSerial();
  printArray(sensorValues);

  // Draw a dot at the position given by the two potentiometers.
  fill(255);
  ellipse(sensorValues[0], sensorValues[1], 10, 10);
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[7], 9600);

  myPort.clear();
  myString = myPort.readStringUntil(10);  // 10 = '\n' linefeed in ASCII
  myString = null;

  sensorValues = new int[Val];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);  // 10 = '\n' linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == Val) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

 

And the sketch works like this:

 

I didn’t get to the second exercise.
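Looking back, one possible refinement to the first exercise would be to use the unused oldValues array to remember the previous reading, so that a line connects each new point to the last one and the result feels more like a real Etch A Sketch. A hypothetical replacement for the draw() above (not something I actually implemented):

void draw() {
  updateSerial();

  // Connect the previous point to the new one for a continuous line.
  if (oldValues != null) {
    stroke(255);
    line(oldValues[0], oldValues[1], sensorValues[0], sensorValues[1]);
  }

  // Remember the current reading for the next frame.
  oldValues = new int[] {sensorValues[0], sensorValues[1]};
}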

Main File.

Programming design:

Title: Interact to Expose.
Title: Programming the wild.

Title: Pilgrimage and brainstorming.
Title: Warm smell of tea.

 

Film and Animation

《竹石——立根原在破岩中》 (Bamboo and Rock: Rooted Firmly in the Broken Rock)

 

《编码与自然互动》 (Coding and Interacting with Nature)

Design with hardware: 

Title: 3D model water wheel liquid trash can prototype

     

Title: under the night light

 

 

Recitation 10 Documentation: Media Controller (Leon)

Below is the code for Processing:

import processing.serial.*;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2;
int[] sensorValues;

PImage img;

void setup() {
  size(400, 600);
  noStroke();
  background(0);
  img = loadImage("Unknown.jpg");
  setupSerial();
}

void draw() {
  for (int i = 0; i < 100; i++) {
    //int size = int( random(1, 20) );
    // Map the first potentiometer (0-1023) to the dot size (1-20).
    int size = int(map(sensorValues[0], 0, 1023, 1, 20));
    int x = int(random(img.width));
    int y = int(random(img.height));
    color c = img.get(x, y);   // sample the image color at that point
    fill(c);
    ellipse(x, y, size, size);
  }

  updateSerial();
  printArray(sensorValues);
}

void mousePressed() {
  saveFrame("Unknown.png");
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[14], 9600);

  myPort.clear();
  myString = myPort.readStringUntil(10);  // 10 = '\n' linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);  // 10 = '\n' linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

 

Below is the code for Arduino:

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Read both potentiometers and send them as "value1,value2" followed by a newline.
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  Serial.print(sensor1);
  Serial.print(",");
  Serial.print(sensor2);
  Serial.println();

  delay(100);
}

For this recitation exercise, I chose to alter an image found on the internet with two potentiometers. I think the hardest part of this exercise was the Processing side; using arrays and integers was pretty difficult and hard to understand. However, the Arduino sketch and the circuit were pretty simple: the circuit did not require a lot of wiring, and the Arduino code only needed to serial-print the values and connect to the right port. Overall, I thought this exercise was an interesting way to change images and create a different type of art.
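For anyone else confused by the array part, the core of it is just splitting the incoming serial string on the comma and converting each piece to an int. Here is a tiny standalone Processing example of that step, using a hard-coded string in place of real serial data:

void setup() {
  int NUM_OF_VALUES = 2;
  int[] sensorValues = new int[NUM_OF_VALUES];

  // Pretend this line just arrived from the Arduino over serial.
  String myString = "512,87\n";

  String[] serialInArray = split(trim(myString), ",");   // {"512", "87"}
  if (serialInArray.length == NUM_OF_VALUES) {
    for (int i = 0; i < serialInArray.length; i++) {
      sensorValues[i] = int(serialInArray[i]);            // convert text to numbers
    }
  }
  printArray(sensorValues);   // [0] 512  [1] 87
}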

Recitation 10: Making a Media Controller-Rahmon Chapoteau (Leon)

For this recitation, I wanted to make a controller that would control the size of the circles in the Processing sketch, which would make the live video look more or less pixelated. The first thing I tried to do was make and control the color and placement of rectangles, since I still did not really understand serial communication between Arduino and Processing:

 

After I had a better understanding of this, I got a lot of help from the fellows on how to fill the screen with the circles/pixels, and multiply them as I moved my potentiometer. Although I had trouble understanding how to fill the screen with circles based on how much the potentiometer moved, I did start to have a better understanding of the serial communication between Arduino and Processing. Here is the final result of my project:

Processing 

import processing.video.*;
import processing.serial.*;

Capture cam;
Serial myPort;

int sizeX = 10;
int sizeY = 10;
int valueFromArduino;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  myPort = new Serial(this, Serial.list()[ 3 ], 9600);
}

void draw() {
   while ( myPort.available() > 0) {
    valueFromArduino = myPort.read();
    println(valueFromArduino);
  }
  
  if (cam.available()) {
    cam.read();
    //can load pixels of camera input
    //just like how we load pixels of an image
    cam.loadPixels();

    int sizeArduino = int(map(valueFromArduino, 0, 255, 5, 20));
    int w = cam.width;
    int h = cam.height;
    for (int y = 0; y < h; y +=sizeArduino) {
      for (int x = 0; x < w; x+=sizeArduino) {


        int i =  x + y*w; // *** IMPORTANT ***

        float r =  red(cam.pixels[i]); 
        float g =  green(cam.pixels[i]);
        float b = blue(cam.pixels[i]);
        float brightness = map(mouseX, 0, width, 0, 255);
        //cam.pixels[i] = color(r+brightness, g+brightness, b+brightness); 

        fill(r, g, b);
        ellipse(x, y, sizeArduino, sizeArduino);


        //include size variable. 
        //if mouseX > ..., decrease size 


        //if ((mouseX <160)) {
        //  sizeX = 5;
        //  sizeY = 5;
        //} else if ((mouseX > 160) && (mouseX <320)) {

        //  sizeX = 10;
        //  sizeY = 10;
        //  //ellipse(x, y, sizeX, sizeY);
        //} else if ((mouseX > 160) && (mouseX <320)) {

        //  sizeX = 10;
        //  sizeY = 10;
        //  //ellipse(x, y, sizeX, sizeY);
        //} else if ((mouseX >320) && (mouseX <480)) {

        //  sizeX = 15;
        //  sizeY = 15;
        //} else if ((mouseX >480) && (mouseX <640)) {
        //  sizeX = 20;
        //  sizeY = 20;
        //}



        //1023 highest for potentiometer, can use map
      }
    }
    cam.updatePixels();
  }
}

//void captureEvent(Capture cam) {
//  cam.read();
//}

Week 15: Internet Art project – Abdi (Chen)

Title: BRAND NEW WORLD

Link: http://imanas.shanghai.nyu.edu/~kap633/final-website/

Partner: Katie Pellegrino

Conception & Design:

In designing our internet art project, Katie and I wanted to mock and comment on our 21st-century brand-obsessed consumer culture. We began by posing the question: "Why do people care about brands and logos?" Our inquiry led us to hypothesize that it isn't necessarily the products that people are pursuing, but rather the feelings they derive from wearing visible branding. So we sought to tap into these base pursuits by creating an ironic webshop where we would sell consumers the feelings they're after without their needing to buy the product. ("Skip the product, cop the feeling!") We came up with the name 'BRAND NEW WORLD', which we thought was a clever play on 'BRAVE NEW WORLD', Aldous Huxley's dystopian social satire.

Process:

When building our project, Katie and I sat down and discussed how we wanted this webshop to look. We were really fascinated by the aesthetics and sketchy look of early-2000s websites and the nostalgia they evoke, so we aimed to make our webshop simple and tacky in its visual design. We went with Times New Roman for the title and all the headers, because that was the go-to font for many websites back then, and a bright blue for the text color against a pale yellow background, because that felt pretty 2002-ish. We used an image of a web browser from that era to build our webshop within, but used Photoshop to adjust it and add functions that would work with and fit our site. Katie took the lead on the technical web construction; I took the lead on the visual assets. I used Adobe Illustrator to distort brand logos into the underlying feelings that they might give to their consumers (Champion > Cool, Rolls Royce > Really Rich). To give our site a more retro feel, we implemented GIFs that were blatantly consumer-centric (dollar signs, 'I LOVE SHOPPING', etc.).

We were initially stuck on how to convey our idea of buying the feelings while still reflecting the actual brands. During user testing, we received feedback that it wasn't entirely clear our project was supposed to be a webshop, and it was suggested that we add pricing and a checkout process to better convey it. One night while working on the project, we had an epiphany: juxtapose the original logos against the twisted "feeling" logos by fading from one into the other on hover, and add prices significantly cheaper than the actual relative price ranges of the brands' products. The prices of the feelings we're selling drop significantly to reflect the "cheap feelings" that buying things gives us. When checking out, users click the cart button, which redirects them to an article on the psychological effect of purchasing luxury goods.

Future:

Upon receiving feedback, we probably should have been a bit less obvious with the irony and allowed users to think it was a real webshop and, after exploring the site a bit, discover that "Oh! This is actually not a real webshop, but an art piece." I also would have loved to expand our "product inventory": we wanted to add more than just three items for each category, but unfortunately time did not allow for the creation of more visual assets. In the future, I imagine there would be at least nine per category, enough for the user to scroll while browsing.

 

Overall, I am very satisfied with how our project turned out and with how well Katie and I were able to execute it. Our final product looked almost exactly how I had imagined it since day one and our initial rough sketch on graph paper.