Documentation for Final Project “Kelp me!”

Documented by: Kaley Arnof

Name of the project: Kelp me!

This project pushed us to create a game of interaction, the final test of our knowledge and newly acquired skills. We decided to test ourselves by creating a game that forces the users to interact both with each other and with their controllers to reach a mutual goal. The interactions between players should be collaborative in nature, not competitive; the controllers should be used in a distinct, unique, and intuitive way, all the while catalyzing the interaction between the user and the computer. We wanted our game to be accessible to gamers across a wide range of ages. The game should also allow people without prior gaming experience or knowledge of our issue to play with relative ease.

The game itself is an ice-themed maze in which players jump onto platforms and over obstacles which, when done correctly, gets their character to safety. Only when both players reach the goal does either feel the joy of victory. Our premise for the game, penguins in the Antarctic Circle, was inspired by our desire to gamify a crucial societal problem, pollution, and to spread awareness and promote discussion of these issues.

Goal: create a game that users can instinctively understand how to play.

 

The Research behind Kelp Me!

Before embarking on our journey, we first needed to figure out our destination. Before I jump into our specific project, I want to take a moment to talk more generally about research-based work. I greatly appreciate this course’s emphasis on the research process. Over my few brief years, I’ve come to believe that the meta-cognitive period is the most important step in any large task. Finding the right inspiration can form quite beautiful waves of creation, which in turn can inspire others. Communal building dates back to ancient philosophers and engineers, and I want to take the time to recognize IMA’s adoption of this tradition.

Research and Development for the Controller

http://graphics.cs.cmu.edu/projects/Avatar/avatar.pdf

When bouncing around a number of different ideas, Anna brought up a paper she read about interactive control of ‘avatars’ within a game. Through various sensors placed all around the player, the user could navigate an entire virtual world in real time. We both loved this concept, but knew that we lacked the knowledge to make a VR game. This brought us to a scaled down version of this idea—an isolated sensor which controlled a set of movements. Despite the restriction, we wanted the player to have as much mobility as possible. This led us to our designated area of the body: the feet.

 

This triggered memories of games we’d played in basements and arcades: the infamous “Dance Dance Revolution” pad and the Wii Balance Board.

 

 

Neither of these games captured our idea in the free form we wanted. Our idea involved the player being able to move their legs quickly, but without any “cheating” by the player (as seen in many Wii games). Our first idea was pretty similar to those boards: we brainstormed an interactive board on which the players signal their penguin to run to the right by leaning to the right, and vice versa. Jumping would also make the penguin jump. This style of console works well in a game such as Dance Dance Revolution, but doesn’t translate as well to our game. We imagined it could be too difficult with not enough payoff: the player could easily slip off the board, lose the game, and feel unnecessarily frustrated. This meant it was time for us to diverge from our inspirations and imagine something new.

We don’t have one foot, we have two… what would it be like to have two separate controllers, instead of one? Walking would make the penguin walk, jumping would jump; the virtual world would mirror the real. This is much closer to what we wanted to make, and we were both quite excited at our realization.

Initially, we thought of using pressure sensors for our controller. But just like in the midterm project, we simply needed digital input; analog was of no use. Additionally, body weight is difficult to control and standardize when a player is putting their full weight onto a sensor. Some sort of button therefore made much more sense and would solve this problem entirely.

Once we settled on digital buttons rather than analog pressure sensors, we needed to actually build the controllers. Surprisingly, the construction didn’t need as much troubleshooting as we predicted. Using the same technique as the button from the midterm, we cut oval-shaped pieces of cardboard that fit the dimensions of any foot. We used two distinct pieces for each “single” controller and attached conductive tape to one side of each, soldering a long strand of wire onto each taped side, one to be connected to a digital pin and the other to ground. We cut the same shape out of a styrofoam/fabric-like material to keep the two pieces from constantly touching one another. In the center we cut two holes that allow the cardboard pieces to touch when pressed together. When the two pieces of conductive tape touch, they close the circuit, and the Arduino reads them as simple buttons. And with that, we had made our penguin feet controllers!
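To make the mapping concrete, here is a minimal sketch (not our actual game code) of how Processing could treat the two foot switches as buttons, assuming the Arduino sends the two switch states as comma-separated 0/1 values in the same serial format we used in the recitation exercises. Names like penguinX and STEP are illustrative placeholders.

// Minimal sketch: two foot switches arriving as "left,right" (0 or 1) over serial.
import processing.serial.*;

Serial myPort;
int[] feet = {0, 0};      // feet[0] = left switch, feet[1] = right switch
float penguinX = 50;
final float STEP = 2;     // pixels moved per frame while a foot is pressed

void setup() {
  size(800, 600);
  myPort = new Serial(this, Serial.list()[0], 9600);  // port index is a placeholder
  myPort.bufferUntil('\n');
}

void draw() {
  background(200, 230, 255);
  if (feet[0] == 1) penguinX -= STEP;   // left foot pressed: walk left
  if (feet[1] == 1) penguinX += STEP;   // right foot pressed: walk right
  ellipse(penguinX, height - 50, 40, 40);  // stand-in for the penguin sprite
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  int[] vals = int(split(trim(line), ','));
  if (vals.length == 2) {
    feet[0] = vals[0];
    feet[1] = vals[1];
  }
}

With this structure, a jump could be detected whenever both switches read 0 at the same moment, i.e. both feet have momentarily left the controllers.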

This was the design we brought into user testing. This is what the controllers and the interaction looked like at the time:

[video: user testing for final]

The user testing, as always, was incredibly useful in the editing/improving/streamlining process. People seemed generally excited to test our project, and were often surprised to discover how the controls work. Some people found the controllers a bit difficult to use at first, which can be attributed not to the players but to us, for preparing a far too difficult level 1. One of the main questions we asked was whether players preferred the controllers resting on the ground or attached to their feet. Of the thirteen people we asked, eleven thought attaching the consoles to their feet would improve the overall experience. We also asked whether our user testers preferred jumping or sliding movements. The response was overwhelmingly pro-sliding.

Taking the feedback to heart, we added straps and changed the movement controls from jumping to sliding, which also meant that the player now had to jump in real life to make the penguin jump. For the straps, we used velcro, which allowed the player to adjust the strap to their foot, ensuring it stayed on while playing. This method also meant that the controllers could come on and off easily and could be worn without the players having to take their shoes off. In addition to the changes from our feedback, we also made a few changes of our own: namely, we covered the controllers with black fabric, for both aesthetic and practical reasons. Our vision also included adding toes to the controllers so that they more clearly resembled penguin feet, but due to time constraints we kept them as they were.

 

Reflecting on the post-user-testing version of the console, I am both proud and critical. I’m proud of the functionality; these feet accomplish what we required of them. At the same time, I wish we had more time to create a long-lasting, wearable, and aesthetically pleasing version of our consoles. One thing that became clear after user testing was that cardboard, as a material, wears out quite quickly. If I were to redesign the feet, I would recommend using cork, foam, or plastic instead. Additionally, the feet should not have cords attached to them. Not only do they get extremely tangled, but they ruin the illusion of reality by drastically limiting mobility. In theory, the console could grow into a line of different animal feet that correspond with different releases of the game. As critical as I am, I must remind myself that my nit-picky nature stems from my passion for the project idea and my desire to make it as amazing as possible.

Research and Development for the Interface

Without a doubt, the game we envisioned came from games we’ve enjoyed in the past. Our initial idea was something that vaguely involved penguins getting through real-life obstacles to reach safety. This led us to a game in which a penguin jumps and slides across melting pieces of ice, trying to get to shore. But this idea brought up a problem: how would the two players work together? One solution was to give the two players different jobs that together would make one working penguin. After debating between two avatars controlled by two players versus one avatar controlled by two players, the single avatar was pushed off the table, mainly because having two players operate one avatar could lead to enormous frustration for one or both players. At this point, we turned to our research. We liked the look and feel of a moving screen, as in Temple Run,

[Temple Run screenshot]

but didn’t know how to incorporate two players into that format. Could we split the screen? No, that loses the interaction between players that we want. We could stop the screen unless both players are on it, or let the moving screen become the mechanism that knocks players out (i.e., if one of the players falls too far behind, both lose). But both of these ideas felt unnecessarily difficult when compared with another format: the static screen.

Fireboy and Watergirl was an inspiration from the beginning; both Anna and I had fond memories of playing this game and wanted ours to evoke the same feeling, so using its static screen felt right and fit much more smoothly with our game. The next step turned away from our old friend Arduino to our newer, slightly scarier friend, Processing. Luckily, some open-source code for a similar game was available.

 

Reading through (and later manipulating) the code brought me to a whole new level of coding. Although I knew about each of the techniques used (arrays, voids, blocks, etc.), seeing someone else use these tools in a new way taught me more than I ever could have deduced on my own. One of the aspects of this project I am proudest of is truly understanding, line by line, what my code is accomplishing. Although that might not sound impressive to someone else, to me it felt like a major breakthrough.

Paralleling the console process, after understanding what came before, it was time to forge our own path. The first step was the conception of a (far too difficult) layout for the game. The level we made was specially designed to require both players in order to complete the challenges. Converting this layout to Processing was surprisingly simple, yet time-consuming.

Before user testing, there were two major features we needed to add that the initial code did not have. The first came from the difference between a single-player and a two-player experience: for our game, we needed three different kinds of dead zones. Although I spent far too much time trying to figure out how to accomplish this, the solution was simply to indicate in the array which number (0, 1, 2, 3) applied to which player, and then coordinate this with the building blocks, as sketched below. The second task, making movable boxes, was not completed before user testing. Although we didn’t have time to finish this part, one way of doing it is to create another “player” using a void, then write a for loop that moves this “player” one unit over when pushed by the other player. We could also have made the sides of the box dead zones to clarify which player needed to move which box.
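Here is a simplified sketch of that dead-zone idea, assuming the level is stored as a grid of ints the way the open-source platformer code stores its map. The tile values and the isDeadly() helper are illustrative, not our exact code.

// Grid where different tile values are deadly to different players.
int[][] level = {
  {0, 0, 0, 0, 0, 0},
  {0, 0, 2, 2, 0, 0},   // 2 = deadly for player 1 only
  {0, 3, 3, 0, 4, 4},   // 3 = deadly for player 2 only, 4 = deadly for both
  {1, 1, 1, 1, 1, 1}    // 1 = solid ice block, safe for everyone
};

// Returns true if the tile at (col, row) kills the given player (1 or 2).
boolean isDeadly(int col, int row, int player) {
  int tile = level[row][col];
  if (tile == 4) return true;          // deadly for everyone
  if (tile == 2) return player == 1;   // only player 1 dies here
  if (tile == 3) return player == 2;   // only player 2 dies here
  return false;
}

void setup() {
  println(isDeadly(2, 1, 1));  // true: player 1 dies on this tile
  println(isDeadly(2, 1, 2));  // false: player 2 can cross it safely
}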

In addition to the feedback about the console, we also received very useful information about the interface during user testing. The responses, though different in wording, all echoed the same message: the interface was ugly and unclear. One user said outright, “make it more beautiful.” For our project, this translated into making the interface relevant to the game concept. We needed snow, ice, oil, and penguins! As for the clarity, this called for starting, winning, and losing screens, as in the sketch below. Again, due to time constraints, we didn’t add the starting screen, sticking only to the winning and losing screens.
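Roughly, those two screens boil down to a game-state check at the top of draw(). This is a minimal sketch of the idea, assuming a global gameState variable that the goal and dead-zone checks set elsewhere; the state values and messages are placeholders, not our exact wording.

int gameState = 0;   // 0 = playing, 1 = both penguins reached safety, 2 = lost

void setup() {
  size(800, 600);
}

void draw() {
  if (gameState == 1) {
    // winning screen
    background(180, 220, 255);
    fill(0);
    textAlign(CENTER, CENTER);
    textSize(48);
    text("You made it to safety!", width/2, height/2);
  } else if (gameState == 2) {
    // losing screen
    background(20);
    fill(255);
    textAlign(CENTER, CENTER);
    textSize(48);
    text("The ice got you... try again", width/2, height/2);
  } else {
    background(255);
    // normal gameplay drawing would go here
  }
}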

Reflecting on the project, and on this semester, the idea of interaction has taken on a whole new level of significance. I initially defined interaction as a conversation between two or more subjects in which one or more subjects send and receive a signal (input, starting the conversation, etc.). While I stand by my initial definition of interaction, I would revise it to emphasize that the exchange is, in essence, two-sided. Additionally, I would like to find a way of adding the word “meaningful” into the phrasing; interaction has purpose.

(I’m having a lot of trouble uploading a video of the project due to the wifi strength, my apologies. I will share the link via google drive.)

Recitation 9: Serial Communication (Leon)

Recitation 9: Serial Communication

Date: 12/16/2018

Instructor: Leon & Yang

In today’s recitation, we were asked to use Arduino to send two values to Processing via serial communication, and to build a sketch that works like an Etch A Sketch.

For this exercise, I first built a circuit with two potentiometers and wrote a very simple Arduino sketch to read the two analog values from the potentiometers. My code in Arduino looks like this:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  Serial.print(sensor1);
  Serial.print(",");
  Serial.print(sensor2);
  Serial.println();
}

Then I built a sketch in Processing. First I imported the values from Arduino, and then I created an ellipse whose position changes according to those values. My code looks like this:

 

import processing.serial.*;

String myString = null;
Serial myPort;

int Val = 2;
int[] sensorValues;
int[] oldValues;

void setup() {
  size(1000, 1000);
  background(0);
  noStroke();
  frameRate(30);
  setupSerial();
}

void draw() {
  updateSerial();
  printArray(sensorValues);

  fill(255);
  ellipse(sensorValues[0], sensorValues[1], 10, 10);
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[7], 9600);

  myPort.clear();
  myString = myPort.readStringUntil(10);  // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[Val];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);  // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == Val) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

 

And the sketch works like this:

 

I didn’t get to the second exercise.

Recitation 10 Documentation: Media Controller (Leon)

Below is the code for Processing:

import processing.serial.*;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2;
int[] sensorValues;

PImage img;

void setup() {
  size(400, 600);
  noStroke();
  background(0);
  img = loadImage("Unknown.jpg");
  setupSerial();
}

void draw() {
  for (int i = 0; i < 100; i++) {
    //int size = int( random(1, 20) );
    int size = int(map(sensorValues[0], 0, 1023, 1, 20));
    int x = int(random(img.width));
    int y = int(random(img.height));
    color c = img.get(x, y);
    fill(c);
    ellipse(x, y, size, size);
  }

  updateSerial();
  printArray(sensorValues);
}

void mousePressed() {
  saveFrame("Unknown.png");
}

void setupSerial() {
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[14], 9600);

  myPort.clear();
  myString = myPort.readStringUntil(10);  // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10);  // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i = 0; i < serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

 

Below is the code for Arduino:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  Serial.print(sensor1);
  Serial.print(",");
  Serial.print(sensor2);
  Serial.println();

  delay(100);
}

For this recitation exercise, I chose to alter an image found on the internet with two potentiometers. I think the hardest part of this exercise was the Processing side: using arrays and integers was pretty difficult to understand. However, the Arduino and the circuit were pretty simple. The circuit did not require a lot of wiring, and the Arduino code only needed to print to serial and connect to the right port. Overall, I thought this exercise was an interesting way to change images and create a different type of art.

Recitation 10: Making a Media Controller-Rahmon Chapoteau (Leon)

For this recitation, I wanted to make a controller that would control the size of the circles in the Processing sketch, which would make the live video look either more or less pixelated. The first thing I tried to do was make and control the color and placement of rectangles, since I still did not really understand serial communication between Arduino and Processing:

 

After I had a better understanding of this, I got a lot of help from the fellows on how to fill the screen with the circles/pixels, and multiply them as I moved my potentiometer. Although I had trouble understanding how to fill the screen with circles based on how much the potentiometer moved, I did start to have a better understanding of the serial communication between Arduino and Processing. Here is the final result of my project:

Processing 

import processing.video.*;
import processing.serial.*;

Capture cam;

int sizeX = 10;
int sizeY = 10;

Serial myPort;
int valueFromArduino;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  myPort = new Serial(this, Serial.list()[ 3 ], 9600);
}

void draw() {
   while ( myPort.available() > 0) {
    valueFromArduino = myPort.read();
    println(valueFromArduino);
  }
  
  if (cam.available()) {
    cam.read();
    //can load pixels of camera input
    //just like how we load pixels of an image
    cam.loadPixels();

    int sizeArduino = int(map(valueFromArduino, 0, 255, 5, 20));
    int w = cam.width;
    int h = cam.height;
    for (int y = 0; y < h; y +=sizeArduino) {
      for (int x = 0; x < w; x+=sizeArduino) {


        int i =  x + y*w; // *** IMPORTANT ***

        float r =  red(cam.pixels[i]); 
        float g =  green(cam.pixels[i]);
        float b = blue(cam.pixels[i]);
        float brightness = map(mouseX, 0, width, 0, 255);
        //cam.pixels[i] = color(r+brightness, g+brightness, b+brightness); 

        fill(r, g, b);
        ellipse(x, y, sizeArduino, sizeArduino);


        //include size variable. 
        //if mouseX > ..., decrease size 


        //if ((mouseX <160)) {
        //  sizeX = 5;
        //  sizeY = 5;
        //} else if ((mouseX > 160) && (mouseX <320)) {

        //  sizeX = 10;
        //  sizeY = 10;
        //  //ellipse(x, y, sizeX, sizeY);
        //} else if ((mouseX > 160) && (mouseX <320)) {

        //  sizeX = 10;
        //  sizeY = 10;
        //  //ellipse(x, y, sizeX, sizeY);
        //} else if ((mouseX >320) && (mouseX <480)) {

        //  sizeX = 15;
        //  sizeY = 15;
        //} else if ((mouseX >480) && (mouseX <640)) {
        //  sizeX = 20;
        //  sizeY = 20;
        //}



        //1023 highest for potentiometer, can use map
      }
    }
    cam.updatePixels();
  }
}

//void captureEvent(Capture cam) {
//  cam.read();
//}

Interaction Lab Final Project Coneys (Cossovich)

My final project was an arcade-style asteroid dodger called Space Cadet, which focused on bridging the gap between retro arcade games and new forms of interaction.

First we built the physical components of the project. Essentially, the steering wheel and gas pedal are a glorified button and potentiometer. For the steering wheel, we first cut out a Styrofoam core, sanded it down to a good shape, and then paper-mâchéd over the top of it. We then painted it to give it a nice finished look before hot-gluing it to a 100k potentiometer.

Ori made almost all of our graphics in Processing using an object-oriented approach that allowed us to generate the objects as we saw fit. By continually looping the objects back to the top of the screen after they passed the bottom, she created a scrolling effect that was perfect for our purposes. We then added collision detection using five points along the edge of the rocket, checking whether any of the points was closer to the center of a planet or asteroid than that object's radius; if so, a collision was detected. This had a large impact on our frame rate, so we settled on five points as the optimum number for collision detection: one on the nose, one on either side in the middle, and one on each wing. Finally, we added the mechanics that rely on collisions, namely the lives and score, plus a small invulnerability window after a collision so that only one life was lost each time a collision happened.

For added immersion, we then placed the camera view of the player inside the window of the rocket ship. This was a pretty simple crop and resize of the camera input, manually adjusted using a pixel offset from the rocket until the player was inside the window and the edges of the capture were hidden under the rocket. Finally, some sound effects and background music were added to complete the retro feel, and a box was created to enclose the entire project and provide solid mounting for the steering wheel. However, our measurements were slightly off and we were unable to fully house everything correctly.
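The five-point check and the scrolling loop are the most algorithmic parts, so here is a rough sketch of both as standalone Processing helpers. It assumes the rocket's collision points and each obstacle's position and radius are stored in arrays and PVectors; the names (pointsX, hitsObstacle, scrollObject) are illustrative, not our actual source.

// The rocket's five collision points: nose, two mid-body points, two wingtips.
float[] pointsX = new float[5];
float[] pointsY = new float[5];

// True if any of the five points lies inside the given obstacle.
boolean hitsObstacle(float obsX, float obsY, float obsRadius) {
  for (int i = 0; i < 5; i++) {
    if (dist(pointsX[i], pointsY[i], obsX, obsY) < obsRadius) {
      return true;
    }
  }
  return false;
}

// Loop an object back above the screen once it scrolls off the bottom,
// which is what produces the scrolling effect described above.
void scrollObject(PVector pos, float speed) {
  pos.y += speed;
  if (pos.y > height) {
    pos.y = -50;             // respawn just above the screen
    pos.x = random(width);   // at a new horizontal position
  }
}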

Link to source code:

https://drive.google.com/open?id=1ixNlQdIcZVAJi5Lowk4xFE6AwNyhae68

Link to documentation videos:

https://drive.google.com/open?id=1KYy98kuLxi_nwviZn6yBVd0KOOyojnCM

https://drive.google.com/open?id=1zFyEf-QBgFYodQcpfLL4RAs024uJbpRk

https://drive.google.com/open?id=1SY7fHsiWxi-ToXm9kfQ0kAnYEycsNm2e

Interaction Lab Final Project Proposal Coneys (Cossovich)

Our final project, Space Cadet, is an arcade-style asteroid-dodging video game. Our project intends to bridge the gap between the arcade experience and the immersion that new technology can provide. Building upon the foundation of traditional arcade games, we will heighten the immersion using new technology while also making the experience easier and more approachable than standard arcade games.

For the foundation, we will build a traditional arcade setup with a steering wheel and gas pedal, much like many racing games. Having the “vehicle” physically there is both satisfying and deeply immersive. Keeping in style, we’ll use retro graphics and sound and focus on the fundamentals of gameplay. The player will control a rocket that must dodge asteroids and collect planets to increase their score.

We want to increase the immersion, but when controlling a rocket rather than a person, it can be hard to really imagine yourself in the game. But what if the player didn’t have to imagine? To this end, we will take the player’s image from the camera and place it inside the window of the rocket ship. We suspect that being able to see yourself in the game will heighten immersion and increase the connection you feel. Ultimately, the goal of our project is to create something that is fun to play.

We were heavily influenced by a variety of games. A lot of our design comes from arcade games, because we really appreciate how much more satisfying it feels to drive a car or hit something than to move sticks and buttons on a controller. Arcade games are also great in the sense that they are very intuitive: you can look at one and almost immediately know the objective, making them very approachable for all ages and all walks of life. From other types of video games we also realized that the most captivating ones are the most immersive, hence our decision to place the player in the game and our intent to focus on the immersion aspects. We hope that our project shows how, in IMA, old dogs can learn new tricks: in many cases, existing models for games can be tweaked and improved with new technology and forms of interaction for better results.
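One way the camera-in-the-window idea could look in Processing is sketched below, assuming a Capture object for the webcam and a rocket position. The crop region and offsets are illustrative placeholders that would be tuned by hand, not the actual values from the project.

import processing.video.*;

Capture cam;
float rocketX, rocketY;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  rocketX = width/2;
  rocketY = height - 120;
}

void draw() {
  background(0);
  if (cam.available()) cam.read();
  // draw the rocket body here, then paste a cropped, resized piece of the
  // camera frame where the window sits (offsets tuned by hand)
  copy(cam, cam.width/2 - 80, cam.height/2 - 80, 160, 160,
       int(rocketX) - 20, int(rocketY) - 20, 40, 40);
}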

Interaction Lab Recitation 9 Coneys (Cossovich)

The ellipse Etch A Sketch was pretty straightforward: I just used the Arduino code provided, modified to send the readings of the two potentiometers, then inserted those values into an ellipse drawn in Processing like so:

void draw() {
  updateSerial();
  printArray(sensorValues);
  background(0);
  ellipse(512, 512, sensorValues[0], sensorValues[1]);
}

 

The instrument was a little less straightforward. Producing a changing sound wasn’t hard at all: in Processing you simply pass Arduino the x and y locations of the mouse. However, getting good, clear delineations between the notes played was extremely difficult. I played around a lot with delays and noTone() calls, but nothing seemed to work well. If you vary the duration a note plays for, the notes tend to overlap and produce a continuous tone when there is no delay. If you add a delay, the duration of the notes still doesn’t sound quite right, since the length of the delay changes the perception of the note. Varying both together doesn’t work either, as notes still run together on the high end and are spaced further apart for longer notes, making it feel like an exponential scaling rather than a linear one. I experienced similar issues with the basic Arduino sound libraries previously in my midterm project. Eventually I went with something along the lines of

tone(buzzer_pin,y_pos,x_pos);

delay(100);

for each note.