Alicja’s Final Project: Fede, Pingu’s Friend

My presentation can be accessed here: https://docs.google.com/presentation/d/1AVvi2XhT0bkWxeg8-dboIzKY06iUXNZF640Hh83m_Gc/edit?usp=sharing

Since almost all of my assignments for this class shared a penguin theme, I decided to construct a penguin again. This time, however, I focused on the wing movement – I wanted to make it seem organic and to be able to express emotions through it.

Step 1. Design

I started my drafting process thinking I would create a figure that was more Furby-like in shape:

PINGWIN1 PINGWIN3

However, as I received feedback about these sketches, I quickly drew a new design that would emphasize the wings more, since they were supposed to be the main focus of my project:

DSC_1273

Step 2. Wing Mechanism

My first idea was to work with three servos mounted in parallel:

DSC_1274

I thought that, since each of the three parts of the wing could move separately, this structure would let me control the wing movement extremely precisely. I started by hot-gluing the servos onto a plastic board and attaching metal plates to their arms; the joints were held together with rubber bands:

DSC_1151DSC_1148

*Tip: To make the servos stick well to the board, I filled the holes on the bottom of the board with hot glue as well.

After I had this structure built, I plugged it into an external power supply and controlled it with my Arduino. While I was happy with the degree of control this set-up gave me, I quickly found that the metal plates were too rigid:
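As a side note, the Processing side of such a test is tiny. Assuming the board is running the Firmata sketch that the Processing Arduino library talks to, a few lines are enough to move one servo with the mouse; the pin number and serial port index below are placeholders that have to match your own setup, and the analogWrite pattern is the same one used in the full code in Step 5.

import processing.serial.*;
import cc.arduino.*;

Arduino arduino;
int servoPin = 9; // assumption: whichever pin the servo signal wire is on

void setup() {
size(200, 200);
println(Arduino.list()); // list the serial ports so the right index can be picked below
arduino = new Arduino(this, Arduino.list()[2], 57600); // same pattern as the full code in Step 5
arduino.pinMode(servoPin, Arduino.OUTPUT);
}

void draw() {
background(0);
int angle = int(map(mouseX, 0, width, 0, 180)); // mouse position becomes a 0-180 angle
arduino.analogWrite(servoPin, angle); // write the angle to the servo
}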

To fix this problem, my professor, Rudi, suggested using carbon fiber instead of metal plates. Following his advice, I built my second iteration, still using rubber bands as joints:

DSC_1157

This material worked much better when it came to moving the tip of the wing quickly and smoothly:

However, when I dressed this structure in fabric, I was disappointed to find that the material obstructed the movement:

DSC_1161

It was time, then, to come up with a new mechanism, one that would not involve the wings changing their thickness as they moved. Rudi came to the rescue once again and shared with me the concept of a fishing-rod-like structure:

DSC_1163

The idea was that by pulling a string attached to the end of a thin carbon fiber stick, I would be able to bend the wing, moving its tip upwards or downwards depending on whether the string ran along the top or the bottom of the carbon fiber: Video 1

This worked, but not perfectly, because a lot of force was needed to bend the central “bone.” I realized, however, that this could be fixed by adding one more point that stabilized the string: Video 2

DSC_1176 DSC_1194

I created these joints with Instamorph, and I also added a flat piece at the end of the carbon fiber so that I could mount the stick onto a servo arm easily.
*Tip: I used needles to form holes in the Instamorph structure; once the plastic cooled down, I removed them and threaded the string through the holes.

The next step was connecting the servos:

DSC_1169 DSC_1175

I attached carbon fiber to one servo, which was mounted onto a plastic board. Then, I added another board with two servos hot-glued to it. The arms of these servos were pulling one string each. I secured the two separate plates in place with screws and plastic building blocks, as can be seen in the photo. Now my structure looked like this:

DSC_1181

And worked like this: Video 3

While it definitely functioned, pulling the strings proved too demanding for the little servos, and they kept breaking. As a result, I had to switch to bigger ones:

DSC_1189 DSC_1193

These worked perfectly! Finally, I had my wing structure in place. The next step, of course, was to duplicate it and mount it on top of the penguin’s body.

*Tip: A lot of things kept malfunctioning in my project. One of the issues was an unstable connection at the power supply plug. Once I soldered the wires and secured the connections with hot glue, this problem disappeared.

DSC_1218

This is the final design of my wing structure:

DSC_1275

Step 3. Structure

I was extremely lucky in this domain – my professor already had a laser-cut oval shape that fit my design perfectly and offered it to me. I only needed to secure the connection points by hot-gluing them:

DSC_1212

Once I had two wings done, I connected them with little wire loops:

DSC_1203

Then, I added metal plates that connected the servos to a plastic board that served as the base:

DSC_1206DSC_1207

Once that was in place, I could hot-glue my wings on top of the laser-cut oval:

DSC_1209 DSC_1215

Here are the two wings working:

Now, I needed to add a horizontal dimension to the wings, so I attached little perpendicular carbon fiber “bones” to both wings:

DSC_1225 DSC_1221

As shown, I hot-glued them in place. Finally, I also constructed the top part of the penguin’s body, which was made from bent wire and secured with hot glue again:

DSC_1243 DSC_1244

Step 4. Skin

Now it was time to actually transform my electronic structure into a penguin. I started with sewing the wings:

DSC_1229 DSC_1250

I secured them in place with pins:

DSC_1280

Then, I moved on to constructing the cloth that was to cover the body. To achieve that, I cut two pieces of material and sewed them together so that they enveloped the body. I also added a white patch in the front. The sides could be opened and closed with Velcro.

Once I had the body, I could work on the head. I followed this tutorial. First, I cut five shapes like this:

DSC_1258 DSC_1259

Then, I sewed them together at the seams and stuffed the inside:

DSC_1265 DSC_1270

All I was missing now were the eyes and the beak. These I created with magic clay and felt, respectively:

DSC_1278 DSC_1284

And here is Fede:

DSC_1292

Step 5. Programming

After I finished the hardware, I worked a little bit on the software as well. Since the idea was that Fede would express emotions through his wings, I coded six actions: “Happy,” “Sad,” “Chill,” “Hi,” “Hello” and “Fly.” While all of them worked, the last one seemed too intense and broke the strings frequently, so I avoided using it.

DSC_1287

*Tip: I used a hot air gun to heat up the Instamorph tip and replace the broken strings.

Here is my final demo:

A lot of people seemed to really like the wing movement, which made me happy. The concept of these actions expressing emotions, though, seemed confusing to many, so that is the issue I should work on next. Perhaps, through trial and error and user testing, I could find which action sequences are more meaningful to the audience, and which ones have the potential to be universally understood by viewers as expressing certain emotions.

Here is the code:

int happyX, happyY; // Position of "Happy" button
int sadX, sadY; // Position of "Sad" button
int helloX, helloY;
int hiX, hiY;
int chillX, chillY;
int flyX, flyY;
int rectSize = 200; // Width of each button (height is rectSize/4)
color rectColor, circleColor, baseColor;
color rectHighlight, circleHighlight;
color currentColor;
boolean happyOver = false;
boolean sadOver = false;
boolean helloOver = false;
boolean hiOver = false;
boolean chillOver = false;
boolean flyOver = false;

import processing.serial.*;
import cc.arduino.*;
Arduino arduino;

void setup() {
size(680, 160);

println(Arduino.list());
arduino = new Arduino(this, Arduino.list()[2], 57600);
for (int i = 0; i <= 13; i++) {
arduino.pinMode(i, Arduino.OUTPUT); // set every digital pin as an output (the servos sit on pins 8-13)
}
rectColor = color(43,199,203);
rectHighlight = color(236,255,33);
baseColor = color(255,255,255);
currentColor = baseColor;
chillX = 20;
chillY = 20;
sadX = 240;
sadY = 20;
happyX = 460;
happyY = 20;
helloX = 20;
helloY = 90;
hiX = 240;
hiY = 90;
flyX = 460;
flyY = 90;
}

void draw() {
update(mouseX, mouseY);
background(currentColor);

if (happyOver) {
fill(rectHighlight);
} else {
fill(rectColor);
}
stroke(255);
rect(happyX, happyY, rectSize, rectSize/4);
textSize(20);
fill(255, 255, 255);
text("happy", happyX+70, happyY+32);

if (sadOver) {
fill(rectHighlight);
} else {
fill(rectColor);
}
stroke(255);
rect(sadX, sadY, rectSize, rectSize/4);
textSize(20);
fill(255, 255, 255);
text("sad", sadX+85, sadY+32);

if (helloOver) {
fill(rectHighlight);
} else {
fill(rectColor);
}
stroke(255);
rect(helloX, helloY, rectSize, rectSize/4);
textSize(20);
fill(255, 255, 255);
text("hello", helloX+75, helloY+32);

if (hiOver) {
fill(rectHighlight);
} else {
fill(rectColor);
}
stroke(255);
rect(hiX, hiY, rectSize, rectSize/4);
textSize(20);
fill(255, 255, 255);
text("hi", hiX+90, hiY+32);

if (chillOver) {
fill(rectHighlight);
} else {
fill(rectColor);
}
stroke(255);
rect(chillX, chillY, rectSize, rectSize/4);
textSize(20);
fill(255, 255, 255);
text("chill", chillX+80, chillY+32);

if (flyOver) {
fill(rectHighlight);
} else {
fill(rectColor);
}
stroke(255);
rect(flyX, flyY, rectSize, rectSize/4);
textSize(20);
fill(255, 255, 255);
text("fly", flyX+85, flyY+32);
}

void update(int x, int y) {
if ( overSad(sadX, sadY, rectSize, rectSize/4) ) {
sadOver = true;
happyOver = false;
helloOver = false;
hiOver = false;
chillOver = false;
flyOver = false;
} else if ( overHappy(happyX, happyY, rectSize, rectSize/4) ) {
happyOver = true;
sadOver = false;
helloOver = false;
hiOver = false;
chillOver = false;
flyOver = false;
}
else if ( overHello(helloX, helloY, rectSize, rectSize/4) ) {
helloOver = true;
sadOver = false;
happyOver = false;
hiOver = false;
chillOver = false;
flyOver = false;
}
else if ( overHi(hiX, hiY, rectSize, rectSize/4) ) {
helloOver = false;
sadOver = false;
happyOver = false;
hiOver = true;
chillOver = false;
flyOver = false;
}
else if ( overChill(chillX, chillY, rectSize, rectSize/4) ) {
helloOver = false;
sadOver = false;
happyOver = false;
hiOver = false;
chillOver = true;
flyOver = false;
}
else if ( overFly(flyX, flyY, rectSize, rectSize/4) ) {
helloOver = false;
sadOver = false;
happyOver = false;
hiOver = false;
chillOver = false;
flyOver = true;
}
else {
sadOver = happyOver = helloOver = hiOver = chillOver = flyOver = false;
}
}

void mousePressed() {
if (sadOver) {
Sad();
}
if (happyOver) {
Happy ();
}
if (helloOver) {
Hello ();
}
if (hiOver) {
Hi ();
}
if (chillOver) {
Chill ();
}
if (flyOver) {
Fly ();
}
}

boolean overHappy(int x, int y, int width, int height) {
if (mouseX >= x && mouseX <= x+width &&
mouseY >= y && mouseY <= y+height) {
return true;
} else {
return false;
}
}

boolean overSad(int x, int y, int width, int height) {
if (mouseX >= x && mouseX <= x+width &&
mouseY >= y && mouseY <= y+height) {
return true;
} else {
return false;
}
}

boolean overHello(int x, int y, int width, int height) {
if (mouseX >= x && mouseX <= x+width &&
mouseY >= y && mouseY <= y+height) {
return true;
} else {
return false;
}
}

boolean overHi(int x, int y, int width, int height) {
if (mouseX >= x && mouseX <= x+width &&
mouseY >= y && mouseY <= y+height) {
return true;
} else {
return false;
}
}

boolean overChill(int x, int y, int width, int height) {
if (mouseX >= x && mouseX <= x+width &&
mouseY >= y && mouseY <= y+height) {
return true;
} else {
return false;
}
}

boolean overFly(int x, int y, int width, int height) {
if (mouseX >= x && mouseX <= x+width &&
mouseY >= y && mouseY <= y+height) {
return true;
} else {
return false;
}
}
void Happy(){
arduino.analogWrite(8, 0); //tip down right wing
arduino.analogWrite(11, 180); //tip up right wing
arduino.analogWrite(12, 100); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 80); //up and down left wing
}

void Sad (){
arduino.analogWrite(8, 180); //tip down right wing
arduino.analogWrite(11, 0); //tip up right wing
arduino.analogWrite(12, 30); //up and down right wing
arduino.analogWrite(13, 180); //tip down left wing
arduino.analogWrite(9, 0); //tip up left wing
arduino.analogWrite(10, 120); //up and down left wing
}

void Hello () {
arduino.analogWrite(8, 180); //tip down right wing
arduino.analogWrite(11, 0); //tip up right wing
arduino.analogWrite(12, 80); //up and down right wing
arduino.analogWrite(13, 180); //tip down left wing
arduino.analogWrite(9, 0); //tip up left wing
arduino.analogWrite(10, 90); //up and down left wing
delay (500);
arduino.analogWrite(8, 0); //tip down right wing
arduino.analogWrite(11, 180); //tip up right wing
arduino.analogWrite(12, 80); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 90); //up and down left wing
delay (500);
arduino.analogWrite(8, 180); //tip down right wing
arduino.analogWrite(11, 0); //tip up right wing
arduino.analogWrite(12, 80); //up and down right wing
arduino.analogWrite(13, 180); //tip down left wing
arduino.analogWrite(9, 0); //tip up left wing
arduino.analogWrite(10, 90); //up and down left wing
delay (500);
arduino.analogWrite(8, 0); //tip down right wing
arduino.analogWrite(11, 180); //tip up right wing
arduino.analogWrite(12, 80); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 90); //up and down left wing
}

void Hi () {
arduino.analogWrite(8, 180); //tip down right wing
arduino.analogWrite(11, 0); //tip up right wing
arduino.analogWrite(12, 90); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 30); //up and down left wing
delay (500);
arduino.analogWrite(8, 0); //tip down right wing
arduino.analogWrite(11, 180); //tip up right wing
arduino.analogWrite(12, 90); //up and down right wing
arduino.analogWrite(13, 180); //tip down left wing
arduino.analogWrite(9, 0); //tip up left wing
arduino.analogWrite(10, 30); //up and down left wing
delay (500);
arduino.analogWrite(8, 180); //tip down right wing
arduino.analogWrite(11, 0); //tip up right wing
arduino.analogWrite(12, 90); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 30); //up and down left wing
delay (500);
arduino.analogWrite(8, 0); //tip down right wing
arduino.analogWrite(11, 180); //tip up right wing
arduino.analogWrite(12, 90); //up and down right wing
arduino.analogWrite(13, 180); //tip down left wing
arduino.analogWrite(9, 0); //tip up left wing
arduino.analogWrite(10, 30); //up and down left wing
}

void Chill () {
arduino.analogWrite(8, 180); //tip down right wing
arduino.analogWrite(11, 0); //tip up right wing
arduino.analogWrite(12, 130); //up and down right wing
arduino.analogWrite(13, 180); //tip down left wing
arduino.analogWrite(9, 0); //tip up left wing
arduino.analogWrite(10, 30); //up and down left wing
}

void Fly () {
arduino.analogWrite(8, 0); //tip down right wing
arduino.analogWrite(11, 180); //tip up right wing
arduino.analogWrite(12, 130); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 30); //up and down left wing
delay (600);
arduino.analogWrite(8, 180); //tip down right wing
arduino.analogWrite(11, 0); //tip up right wing
arduino.analogWrite(12, 0); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 160); //up and down left wing
delay (600);
arduino.analogWrite(8, 0); //tip down right wing
arduino.analogWrite(11, 180); //tip up right wing
arduino.analogWrite(12, 130); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 30); //up and down left wing
delay (600);
arduino.analogWrite(8, 180); //tip down right wing
arduino.analogWrite(11, 0); //tip up right wing
arduino.analogWrite(12, 0); //up and down right wing
arduino.analogWrite(13, 0); //tip down left wing
arduino.analogWrite(9, 180); //tip up left wing
arduino.analogWrite(10, 160); //up and down left wing
}

Alicja’s Animatronics Assignment 6: “Seeing ourselves in the computer: How we relate to technologies”

The authors of “Seeing ourselves in the computer: How we relate to technologies” claim that people look for human-like features and behaviors in everything they come across in their daily lives. This applies both to squiggles on a piece of paper, in which, according to Topffer’s Law, we inevitably want to see a face, and to computers, which we tend to subconsciously bestow with personalities.

Knowing that this is the way our mind works definitely helps me as I think about my own project. It makes me less worried about trying to create a realistic character, because, as the article shows, humans can easily treat an inanimate object as if it was a person if the very basic conditions of resemblance are met. Naturally, I do also realize that the writers highlight the importance of personality development in enhancing the interaction between the audience and the object, and I am planning to work on this area a little more.

Alicja’s Animatronics Assignment 5a: “It’s the Way You Tell It! What Conversations of Elementary School Groups Tell Us About the Effectiveness of Animatronic Animal Exhibits” by Sue Dale Tunnicliffe

Sue Dale Tunnicliffe in her article “It’s the Way You Tell It! What Conversations of Elementary School Groups Tell Us About the Effectiveness of Animatronic Animal Exhibits” claims that the use of animatronic creatures in exhibitions doesn’t in itself guarantee the interest of the elementary school children visiting them. By studying two different models – one installed in a zoo, and the other in a museum – she determined that what holds crucial importance is that each figure has a context behind it and a story to tell. Otherwise, the educational value most likely will not be transmitted to the students.

As for the questions from last week, here are my answers in reference to this reading:

  1. All animatronics have an audience. What is the main emotion you want to transmit to them?
    Tunnicliffe highlights how important the consideration of the audience is in imparting an educational message to the students. I do not see my final project, however, as having such a didactic purpose. Since my goal is much simpler than that, just evoking joy or amusement in the spectators, I think that making sure this emotion is embodied by the animatronic figure I create should be enough.
  2. Your character lives in a world, has a personality and a story behind. Which one? Does it require a defined stage to be effective?
    Tunnicliffe seems to imply that the existence of a stage in the museum helped the students get more out of the experience of the installation, which makes me reconsider my previous decision not to construct one for my character. I think that perhaps, as the project develops, I should observe people’s reaction to my penguin, and make a choice whether to build one or not based on that.
  3. How does the participatory design methodology work in your own animatronic project?
    The author does not really describe how these exhibitions were constructed, but she does observe how the users interact with them, pointing out areas for potential improvements. This just highlights again how important user testing is. I’ll be sure to take that into account while working on my project.
  4. Are there artists or projects that influence your creation?
    Tunnicliffe does not explore this question at all in her paper, but her observations on the two installations will definitely influence the way I think about my own creation.

Alicja’s Animatronics Assignment 5b: Inverse Kinematics+Audio Processing

I modelled my inverse kinematics robot on a tutorial uploaded to Trossen Robotics, a Wikipedia article on delta robots, and the prototype and code provided by my professor, Rudi. I started the construction with just a white plastic board that was going to serve as the base for my robot. I used a ruler and a pen to delineate an equilateral triangle on it, and marked the points on its sides where I was to place my servos:

DSC_1091

After that, I got my three servos ready by screwing short metal plates to their arms, and then adding longer pieces to their ends. I created the joints with rubber bands, so that they had some elasticity:

DSC_1092DSC_1094

Once that was done, I hot-glued the servos onto the board and created the final joint by connecting the ends of the long metal plates of each servo together:

DSC_1096DSC_1098

The next step was to provide an external source of power, because the Arduino on its own could not supply the current required to run three servos. I decided to use an AC/DC 9 V adapter, so I soldered two wires to a plug that would accept it, and then added a transistor to my circuit so that just the right amount of current passed through the servos:

DSC_1107DSC_1108

After that, I plugged my servos in (I used pins 8, 9 and 10) and loaded the previously linked Arduino code onto my board. Then, I ran the Processing sketch, and here is what I got:

Frankly, the robot surprised me with how precisely it followed the positions I set for it by moving my mouse. Now, however, it was time to add sound to the project.

This part must have taken me the longest time, even though it shouldn’t have. For some reason, I encountered various problems running the Minim sketches provided by the Professor here.

Finally though, after a couple tweaks, like replacing the sample song with a song I had on my computer, and adding the integer values of angleLS and angleRS, I managed to make my robot respond to the sound that was playing:

And here is the Processing code that I used to control the servos:

import ddf.minim.*;
import processing.serial.*;
import cc.arduino.*;
import controlP5.*;
import processing.sound.*;

ControlP5 controlP5;
Arduino arduino;
SoundFile file;
int servoAngle = 90;

Minim minim;
AudioPlayer song;

void setup()
{
size(512, 400, P3D);
println(Arduino.list());
arduino = new Arduino(this, Arduino.list()[2], 57600);
for (int i = 0; i <= 13; i++) {
arduino.pinMode(i, Arduino.OUTPUT); // set every digital pin as an output (the servos are on pins 8, 9 and 10)
}
arduino.analogWrite(9, 0);
arduino.analogWrite(10, 0);
arduino.analogWrite(8, 0);

minim = new Minim(this);

song = minim.loadFile("DancingInTheDark.mp3", 1024);
song.setPan(1);
song.loop();
// use the getLineIn method of the Minim object to get an AudioInput
// in = minim.getLineIn();
}

void draw()
{
background(0);
stroke(255);

float energyR = 0;
float energyL = 0;

// draw the waveforms so we can see what we are monitoring
for (int i = 0; i < song.bufferSize() - 1; i++)
{
line( i, 50 + song.left.get(i)*50, i+1, 50 + song.left.get(i+1)*50 );
line( i, 150 + song.right.get(i)*50, i+1, 150 + song.right.get(i+1)*50 );
energyL = energyL + abs(song.left.get(i));
energyR = energyR + abs(song.right.get(i));
}

println("left " + energyL);
println("right " + energyR);

int angleL = (int)map(energyL, 0, 1000, 0, 180);
int angleR = (int)map(energyR, 0, 1000, 0, 180);

int angleLS = (int) energyL;
int angleRS = (int) energyR;

line( 0, 250, angleL, 250);
text( angleL, 350, 250);
line( 0, 350, angleR, 350);
text( angleR, 350, 350);
arduino.analogWrite(9, 180-angleLS);
arduino.analogWrite(10, 180-angleRS);
arduino.analogWrite(8, angleLS);

// String monitoringState = song.isMonitoring() ? “enabled” : “disabled”;
// text( “Input monitoring is currently ” + monitoringState + “.”, 5, 15 );
}

Alicja’s Assignment 4b: Sketch

As I described in the previous post, I would like to make a penguin that can talk, open and close its eyes, and hopefully move its wings as well. My design is heavily inspired by Furbies, because I think the shape that these toys have (no separate head, just one body unit) would make its execution easier.

Here are my sketches:

PINGWIN1PINGWIN2  PINGWIN4 PINGWIN3

I think that through synchronizing the three actions of opening and closing the eyes, the moving of the mouth, and the spreading of the wings the penguin could showcase a range of emotions, from excitement and joy to sleepiness and boredom.

Alicja’s Animatronics Assignment 4a: Character

  1. All animatronics have an audience. What is the main emotion you want to transmit to them?
    I would like my character to simply bring joy and amuse the audience.
  2. Your character lives in a world, has a personality and a story behind. Which one? Does it require a defined stage to be effective?
    I would like to continue with the penguin theme, since I have already done so much in this direction. I want to keep it cute and child-like (similar to what I did in the animation assignment). My interaction inspiration is Furby and the way I played with it when I was younger.
    As for the background story, I think it could be funny if the penguin were Argentine (the first and only time I saw penguins in their natural habitat was when I went to Ushuaia, in southern Argentina). To express this origin, the animal could talk about very “Argentine” things, like craving alfajores and dulce de leche, drinking mate, etc.
    At the same time though, I would like to keep a certain universality to my project, meaning that I would not want it to require a specific stage.
  3. How does the participatory design methodology work in your own animatronic project?
    In order to make the process of bringing this character to life more participatory, I think I could ask my peers to user test it and give me feedback, as well as consult the opinions of fellows and professors as I am developing it.
  4. Are there artists or projects that influence your creation?
    Definitely the Pingu cartoon and Furby toy have been great influences.

Alicja’s Animatronics Assignment 3c: “Physical Embodiments for Mobile Communication Agents”

Stefan Marti and Chris Schmandt in “Physical Embodiments for Mobile Communication Agents” discuss the creation and testing of their animatronics phone agents, which they designed to look like animals (one of the models was a parrot, another one a bunny and the last one a squirrel) and to interact with humans using both animal- and human-like gestures.

Interestingly, while these robots were described as “cute” by some of the people who tested them, Marti and Schmandt did not design them purely as objects of entertainment, but instead saw them as intelligent machines that could help solve a real-life communication problem. Nowadays, the researchers argue, cell phones are so ubiquitous and yet something about interacting with them in public fails to feel organic, and instead can lead to annoyance (for example, when a call disrupts a family meal). Their solution to this problem lies in redefining the experience of having an incoming call. In place of a loud ringtone, they envision an animatronic animal waking up, verifying the caller ID, making decisions based on the information available to it and all the while communicating with the user, taking advantage of both verbal and non-verbal cues. In their opinion, supported by their research findings, such an interaction proves to be less disruptive than a more traditional ringtone.

Reading about the project and the authors’ intentions behind it, I was wondering if the use of an animatronic animal could also mitigate the negative interactions most people have every day with their alarm clocks. In other words, would being woken up by a bunny feel better than just hearing a ringtone?

Alicja’s Animatronics Assignment 3b: Eyes

This week we had to create an eye mechanism for a puppet and I decided to make eyes that open and close. Here are my sketches:
DSC_1053DSC_1054

I got inspired by this design.

I started out by bending the wire and fixing the ping-pong balls onto it:

DSC_1005DSC_1007DSC_1022

The shape of the frame in the last photo was modified so that it could move within the foam openings, which I first marked on the back of the foam and then cut out:

DSC_1008DSC_1014

After that, I tried to see whether the eyes would fit:

DSC_1017DSC_1019

Once that was confirmed, I sewed black fabric eyelids on top of the metal eyelids, and then added wires entering the ping-pong ball at the bottom and coming out through its side, which I used to fix the eyes onto the foam:

DSC_1027DSC_1028DSC_1032

Here they are fixed, with pupils hot-glued in the front:

DSC_1046

Now came the most important part: attaching the motor to the eye mechanism so that it moved automatically. I used this sketch, made by Professor Rudi, in the next few steps:

DSC_1033

First, I attached a long piece of wire to the hinge of the eye frame; then I used Instamorph to connect the wire to the servo motor:

DSC_1034DSC_1039DSC_1040

Once that was finished, I needed to make sure that the motor itself did not move, so I used two plates to fix it in place:

DSC_1043DSC_1044

Finally, I started hot-gluing little pieces of fabric onto the front:

DSC_1047

And here it is, the mechanism working:

As can be seen in the video, the method is not perfect and the penguin does not quite close his eyes, but it is not far from doing so, and the action itself looks pretty nice, I think. The project could be improved if the original eye construction worked better (since it did not move easily before I added the motor, it was not going to do so afterwards either). A nicer head shape and a better way of decorating it would help too.

Alicja’s Afloat Documentation

In Short: What Is Afloat and Why Create It?

In my capstone project, which I titled Afloat, I explored the relation between visuals and sound. It was composed of three TV screens, two webcams, one video and two soundtracks, one of which was a poetic travelogue and the other a memoir of a relationship.

My inspirations for the project included experimental films like Chris Marker’s Sans Soleil and Chantal Akerman’s News From Home, as well as the works of some video artists, such as W.A.N.T // WE ARE NOT THEM by Atif Ahmad, Cell by James Alliban and Keiichi Matsuda, and China Town by Lucy Raven. As I am fascinated with the medium of film, I wondered whether letting my audience interact with the three screens, and in this way shape the narrative, would make their experience of my work more personal.

Visuals and Sound: Shooting, Writing, Assembling, Reassembling, Rewriting, …

When I first started shooting the videos I ended up using for this project, I was not thinking about Afloat yet. Mesmerized by the scenes unfolding in front of my eyes, I just wanted to document my amazement, subconsciously knowing that the footage was not destined for oblivion in the depths of my external drive.

As I was filming, I was writing as well. Often, words and images come to me in pairs, sometimes complementing each other, sometimes clashing carelessly, all the while making me re-observe and re-think all that I see.

That was from June to mid-November last year, five-and-a-half months of trying to make sense of Latin America, while teaching English in Nicaragua, studying in Argentina and traveling in Chile. At the back of my head, I must have thought I was drafting a second installment to Off, my video about the other America, even though the idea of Afloat was actually older than that and traced back to the Cooking with Sound course I took at ITP in the Fall of 2015.

We met in the afternoons and always started the classes with listening to some kind of sound. That one time, however, the speakers initially refused to work and instead of hearing the wind run through an empty, unfinished house, we just watched it do so. My mind could not stand the silence of the visuals; soon I had a pretty good idea of what this building sounded like; it was an eerie, high-pitched and almost-opera-like song. Then the speakers got fixed and the shock came. It sounded nothing like that.

The memory of this revelation clearly formed the root of Afloat. That’s why I needed one screen of silence, to let the imagination work first. That is also why I needed two scripts – I did not want one absolute, right soundtrack; I wanted an endless amount, created by each participant remixing their intuition with my contradictions. I pictured three screens, two scripts and one video – and I had the spine of my project.

The meat, of course, was the July-to-mid-November footage, amassed at the beginning of this term and complemented with some shots from Austria, Shanghai and New York. The bones I formed from bits and pieces of travel writings – dialogues, monologues, character sketches – and kept resculpting them until almost the very end, so that they fit perfectly and achieved a balance. The dialogue started with vibrant colors, smells and sounds and ended on greyness; the monologue shifted from cautious distance to celebratory ease. Both played over the same set of visuals.

It was my sister, Ola Jader, and her friend, Jordan Brancker, who gave them their voices. I considered adding another shade, and involving a third person for the monologue, but I thought that the thematic links between the two stories needed to be highlighted rather than obscured, and so Ola it was, again.

I added a few embellishments to their voices: some car-travel white noise, the sound of water flowing in the shower, waves hitting the shore and Nicaraguan bus music, a genre in itself. This modesty in the soundscape was made up for in the visuals; I recolored every shot, intensifying all the hues to match what they were in my memory. I lined them all up with the words and overlaid the chosen ones to illustrate and expand on what was spoken. I endlessly cut and extended them, remapped their speeds and played them in reverse, until it all fit perfectly. Or almost so – my final versions transport me to a different universe and let me negotiate my position; hopefully, they do the same for my audience. And still, some of the shots I would like to replace, and some of the words I would like to re-record. It’s never not a work in progress, I guess.

Hardware, Software and the Audience: Tracking, Pacing, Fixing

On the technological side, the process could not have been more different. Instead of intuitively delineating the path as I went, orienting myself towards an increasingly clear destination, I set out with a very specific goal in mind and just needed to figure out how to get there. That goal was an organic interaction between the audience and my work.

I knew then that I needed something seamless. Buttons would not work – it would feel like operating a remote control, and I wanted the eyes, not the hands, to take control. Pressure sensors would not work either, lying in strategic places on the floor and waiting to be stepped on, because pressure exists independently of direction: signalling that somebody is standing close to the screen would not necessarily mean that somebody is looking at it.

Then I thought about Kinect and using the entire body as a cursor, and I believe this could have worked, but when I heard about OpenCV from Tyler, it instantly seemed like the best option; it was the most direct way to read which screen the viewer was watching, since the frontal-face detector only recognizes faces looking straight at the camera (i.e. no profiles). I hoped that would mean that the sound would only play when the audience was actually facing the screen, and stop when they turned away – and that’s exactly what happened.

Luckily, the OpenCV library for Processing proved quite easy to use, especially since I could consult open-source code found online (in particular this one by ManaXmizery). Here is the code I drafted and used during the final presentation:

import gab.opencv.*; //importing the OpenCV library
import processing.video.*; //importing the Video library
import java.awt.*; //importing the Java library
import processing.sound.*; //importing the Sound library
SoundFile file; //initiating the audio
Movie myMovie; //initiating the video

Capture video; //initiating camera capture
OpenCV opencv; //initiating OpenCV

void setup() {
frameRate(60); //setting the frame rate of the video
fullScreen(); //setting the playback to be full screen
myMovie = new Movie(this, "capfinvid.mov"); //loading the video
scale(1.0); //setting the scale of the video to be 1.0 in the case of the big TV screens; when I tested on computers, I needed to scale it down to 0.67
myMovie.speed(7.5); //I am speeding up the rate of the playback here to mitigate for the slowness of the processing power of my computer; this was the only way I found to make the sound and the image run synchronized
myMovie.loop(); //playing the video

video = new Capture(this, 640/2, 480/2); //setting up the camera capture
opencv = new OpenCV(this, 640/2, 480/2); //setting up OpenCV
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  //loading the Cascade that recognizes faces directed straight at the webcam

video.start(); //starting the camera capture

file = new SoundFile(this, "dialocapfin.mp3"); //loading the soundtrack
file.loop(); //playing the soundtrack
file.amp(0.0);// setting the volume to 0
}

void draw() {
opencv.loadImage(video); //asking OpenCV to read the input from the webcam
image(myMovie, 0, 0); //displaying the video
Rectangle[] faces = opencv.detect(); //detecting faces looking at the webcam 
println(faces.length); // text indicator telling us whether there is anybody facing the screen or not

if (faces.length>0) { //checking if a face is detected
file.amp(1.0);   // turning the volume up to 1 if  a face is detected
} else {
file.amp (0.0); // turning the volume down to 0 if  no faces are detected
}
}
void movieEvent(Movie m) {
m.read();
}
void captureEvent(Capture c) {
c.read();
}

It ran on three computers at the same time, with only minor modifications to the file = new SoundFile(this, "dialocapfin.mp3") line; on the second machine just the name of the soundtrack was changed, and on the third this line was deleted altogether, since there was no sound to be played back for that screen. In this way, the interactions between the TVs and the audience were exactly the same; the only thing that differed was the soundtrack.
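Since everything else stayed identical across the machines, those per-screen differences could also be collected into two variables at the top of the sketch, along these lines (just a sketch of how I would parameterize it; the variable names are made up and do not appear in the code above):

boolean hasSound = true;               // false on the silent screen facing the door
String trackName = "dialocapfin.mp3";  // swapped for the other narrative on the second machine

// in setup(), load the audio only when this screen has a soundtrack
if (hasSound) {
  file = new SoundFile(this, trackName);
  file.loop();
  file.amp(0.0);
}

// in draw(), guard the volume changes the same way
if (hasSound) {
  if (faces.length > 0) {
    file.amp(1.0);
  } else {
    file.amp(0.0);
  }
}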

Once I had the code ready, I set up 3 TV screens in a room, the first one facing the entrance, and the other two angled, their centers forming a triangle. It looked like this:

DSC_0975

The TV right across from the door played no sound. The idea was that the audience would first see just the images, forming their own understanding of what they mean, and then, once they turn around, get a chance to reconsider their interpretation by consulting the other two screens, offering two other narratives.

For the installation to work, I had to place two webcams on top of the two TVs that played a soundtrack, and connect all three screens to computers. A technological problem I faced already at this point was how to run the sketches so that the visuals stayed in sync across the three TVs. I ended up using two wireless mice so I could quickly start all of the Processing sketches one after another, which at least made the beginning of the video run in a fairly synchronized way, though by the end the images on the TVs differed significantly because, as it turned out, the computers had disparate processing power. As a result, I also had to deal with sync issues between the visuals and the sound, which I tried to fix by increasing the movie speed, but for some reason that worked on only one of the computers.
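A software-side alternative to the two-mouse trick would be to load each movie without starting it and kick playback off with a key press, so that all three sketches can be started within a fraction of a second of each other. Here is a sketch of that idea (not something I used for the show), replacing the myMovie.loop() call in setup():

// in setup(): load the movie as before, but do not call myMovie.loop() yet

void keyPressed() {
  myMovie.jump(0);  // rewind to the first frame
  myMovie.loop();   // start looping playback on this machine
}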

Another potential code fix that I explored was just muting the video's own audio channel with the mov.volume() command, instead of loading the soundtrack separately. However, this seemed to respond to the face detection only during the initial 20 seconds of running the Processing sketch; after that, the program crashed.
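That variant amounts to dropping the SoundFile entirely and toggling the volume of the video's own audio track in draw(), using the Movie object from the sketch above (called myMovie there). Roughly like this (only a sketch of the idea; as mentioned, in my tests it stopped responding after about 20 seconds and then crashed):

// inside draw(), instead of changing file.amp():
if (faces.length > 0) {
  myMovie.volume(1.0);  // somebody is facing the screen: unmute the embedded soundtrack
} else {
  myMovie.volume(0.0);  // nobody facing the screen: mute it
}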

Perhaps the best solution in the future would be simply to use three computers with exactly the same processing power and to mitigate the sound-video sync issue by increasing the movie playback speed on all of them.

The last, but also extremely significant, technical issue my project suffered from was OpenCV itself. While it recognized my face perfectly, it was not as good at recognizing other people's faces, which puzzled me. Turning on the light in the room helped with this problem to an extent, because more information was supplied to the webcam, but the trade-off was the experience of the video itself, which was better in low light. I wonder if there is a different way to solve this issue or whether I should look into other face recognition technologies.
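One software-side mitigation I could still try is smoothing the detection over several frames, so that a single missed frame does not cut the sound off and a single false positive does not turn it on. Here is a sketch of that idea as a drop-in change to the draw() function above (untested; the faceFrames counter is my own addition):

int faceFrames = 0;  // how many recent frames have contained a face

void draw() {
  opencv.loadImage(video);
  image(myMovie, 0, 0);
  Rectangle[] faces = opencv.detect();

  // count up while a face is visible, count down while it is not,
  // and clamp the counter so it reacts within roughly half a second
  if (faces.length > 0) {
    faceFrames = min(faceFrames + 1, 15);
  } else {
    faceFrames = max(faceFrames - 1, 0);
  }

  // change the volume only when the detection has been stable for a while
  if (faceFrames > 7) {
    file.amp(1.0);
  } else {
    file.amp(0.0);
  }
}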

Despite the mentioned problems, however, my project functioned. As users tested it, I observed how different the approach and timing of each person was, and that made me happy, because it meant (as I had hoped) that for each viewer the experience of my installation varied at least slightly. Some people spent more time with the first screen, while others lost patience for it quickly. All of them, however, found their way to the other two displays, which was crucial for the project, and meant that the basic interaction of navigating different soundtracks for the same set of visuals worked well. Once the sound was activated, some people listened intensely, while others preferred to switch between the two narratives fast, perhaps testing the technology or just wanting to know what the differences were. After that, a couple participants kept going back to the first screen, but most of the users just focused on the other two after initially turning away from the display that they saw first.

All of the viewers explored the installation for the whole duration of the video, which was a little less than three minutes, meaning the project managed to hold their attention for that long. Overall, most people seemed to enjoy Afloat, and a lot of viewers commented on the visuals; the narrative was rarely mentioned, though, which perhaps means I should work on highlighting it more. Sadly, another reaction my installation engendered, besides interest, was confusion, which occurred when the previously described OpenCV issues arose. As the system sometimes failed to respond smoothly to the spectators' movements, they grew uncertain about what the interaction entailed. In the moments when it ran better, though, the audience seemed to quickly grasp how the system worked and played with it accordingly; that means I really need to fix the face recognition issue to make the experience of the installation better.

Compared to this problem, the syncing difficulties I described earlier did not seem to affect the reception of my work as severely. While it was inconvenient for me to have to restart all the sketches before each viewing, the audience's experience was not influenced by this additional step I had to take before they could watch my project. The other syncing issue, between the images and the sound, never elicited a comment from the viewers either. This must mean that while it bothered me, it did not provoke a similar reaction from the spectators. Consequently, I think my top priority for improving the project now is to make the face recognition run seamlessly, and only then to tackle the remaining issues.

Finally, even though I still have lots of improvements to make, I definitely reached my goal for the project, letting each spectator experiment with visuals and sound to create her or his own individual understanding of my work.

Here is my presentation.

And here are the two videos with separated soundtracks:

Alicja’s Animatronics Assignment 3a: “Facial expressions of emotion: an old controversy and new findings”

Paul Ekman in “Facial expressions of emotion: an old controversy and new findings” relays studies on the universality of facial expressions across different cultures, in this way showing that they are not learned or dependent on the culture a person grew up in. Furthermore, it appears that certain facial muscle movements are intrinsically linked to involuntary reactions, meaning that when people were asked to perform the movements associated, for example, with anger, their vitals expressed the sensation of anger. One interesting point the author makes is that there is something called a “Duchenne smile,” which involves moving the outer part of the muscles surrounding the eyes and in this way expresses true enjoyment, as opposed to fake happiness or a grin, which do not engage this particular muscle, since it cannot be moved consciously.

F.A.C.S., which stands for Facial Action Coding System, is for me a system through which each person sends information that can be easily decoded by the recipient, meaning that there are certain facial expressions, created by slight muscle movements, that correspond to very specific emotions.

Also, here’s an Instamorph flower mounted to the servo motor: