Alicja’s Assignment 4b: Sketch

As I described in the previous post, I would like to make a penguin that can talk, open and close its eyes and, hopefully, move its wings as well. My design is heavily inspired by Furbies, because I think the shape these toys have (no separate head, just one body unit) would make the build easier.

Here are my sketches:

[Sketches: PINGWIN1, PINGWIN2, PINGWIN4, PINGWIN3]

I think that by synchronizing the three actions of opening and closing the eyes, moving the mouth, and spreading the wings, the penguin could showcase a range of emotions, from excitement and joy to sleepiness and boredom.
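
To make the synchronization idea concrete, the three movements could be driven from a small table of per-emotion servo poses. Below is a minimal sketch of that idea in Processing with the Arduino (Firmata) library; all pin numbers and angles are hypothetical placeholders, not measured values:

import processing.serial.*;
import cc.arduino.*;

Arduino arduino;
// hypothetical pins for the three servos
int eyelidPin = 9;
int beakPin = 10;
int wingPin = 11;

// each pose is {eyelid, beak, wing} in degrees (made-up values)
int[] excited = {0, 60, 170};  // eyes wide open, beak open, wings spread
int[] sleepy = {70, 0, 10};    // eyelids drooping, beak shut, wings folded

void setup() {
  arduino = new Arduino(this, Arduino.list()[0], 57600);
  arduino.pinMode(eyelidPin, Arduino.SERVO);
  arduino.pinMode(beakPin, Arduino.SERVO);
  arduino.pinMode(wingPin, Arduino.SERVO);
  pose(excited);
}

// moves all three servos at once, so the gesture reads as a single emotion
void pose(int[] p) {
  arduino.servoWrite(eyelidPin, p[0]);
  arduino.servoWrite(beakPin, p[1]);
  arduino.servoWrite(wingPin, p[2]);
}

void draw() {
}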

Alicja’s Animatronics Assignment 4a: Character

  1. All animatronics have an audience. What is the main emotion you want to transmit to them?
    I would like my character to simply bring joy and amuse the audience.
  2. Your character lives in a world, has a personality and a story behind it. What are they? Does it require a defined stage to be effective?
    I would like to continue with the penguin theme, since I have already done so much in this direction. I want to keep it cute and child-like (similar to what I did in the animation assignment). My interaction inspiration is Furby and the way I played with it when I was younger.
    As for the backstory, I think it could be funny if the penguin was Argentine (the first and only time I saw penguins in their natural habitat was when I went to Ushuaia, in southern Argentina). To express this origin, the animal could talk about very “Argentine” things, like craving alfajores and dulce de leche, drinking mate, etc.
    At the same time though, I would like to keep a certain universality to my project, meaning that I would not want it to require a specific stage.
  3. How does the participatory design methodology work in your own animatronic project?
    In order to make the process of bringing this character to life more participatory, I think I could ask my peers to user-test it and give me feedback, as well as consult fellows and professors for their opinions as I develop it.
  4. Are there artists or projects that influence your creation?
    Definitely the Pingu cartoon and Furby toy have been great influences.

Alicja’s Animatronics Assignment 3c: “Physical Embodiments for Mobile Communication Agents”

In “Physical Embodiments for Mobile Communication Agents,” Stefan Marti and Chris Schmandt discuss the creation and testing of their animatronic phone agents, which they designed to look like animals (one model was a parrot, another a bunny, and the last a squirrel) and to interact with humans using both animal- and human-like gestures.

Interestingly, while some of the people who tested these robots described them as “cute,” Marti and Schmandt did not design them purely as objects of entertainment; they saw them as intelligent machines that could help solve a real-life communication problem. Cell phones are now ubiquitous, the researchers argue, and yet something about interacting with them in public fails to feel organic and can instead lead to annoyance (for example, when a call disrupts a family meal). Their solution lies in redefining the experience of receiving a call. In place of a loud ringtone, they envision an animatronic animal waking up, checking the caller ID, making decisions based on the information available to it, and all the while communicating with the user through both verbal and non-verbal cues. In their opinion, supported by their research findings, such an interaction proves less disruptive than a traditional ringtone.

Reading about the project and the authors’ intentions behind it, I wondered whether an animatronic animal could also mitigate the negative interactions most people have every day with their alarm clocks. In other words, would being woken up by a bunny feel better than just hearing a ringtone?

Alicja’s Animatronics Assignment 3b: Eyes

This week we had to create an eye mechanism for a puppet and I decided to make eyes that open and close. Here are my sketches:
[Sketches: DSC_1053, DSC_1054]

I was inspired by this design.

I started out by bending the wire and fixing the ping-pong balls onto it:

[Photos: DSC_1005, DSC_1007, DSC_1022]

In the last photo, the shape of the frame has been modified so that it can move within the foam openings, which I first marked on the back of the foam and then cut out:

[Photos: DSC_1008, DSC_1014]

After that, I tried to see whether the eyes would fit:

[Photos: DSC_1017, DSC_1019]

Once that was confirmed, I sewed black fabric eyelids on top of the metal eyelids, and then added wires entering the ping-pong ball at the bottom and coming out through its side, which I used to fix the eyes onto the foam:

[Photos: DSC_1027, DSC_1028, DSC_1032]

Here they are fixed, with pupils hot-glued in the front:

[Photo: DSC_1046]

Then came the most important part: attaching the motor to the eye mechanism so that it moves automatically. I followed this sketch, made by Professor Rudy, in the next few steps:

[Photo: DSC_1033]

First, I attached a long piece of wire to the hinge of the eye frame, then I used InstaMorph to connect the wire to the servo motor:

[Photos: DSC_1034, DSC_1039, DSC_1040]

Once that was finished, I needed to make sure that the motor itself did not move, so I fixed it in place between two plates:

[Photos: DSC_1043, DSC_1044]

Finally, I started hot-gluing little pieces of fabric onto the front:

[Photo: DSC_1047]

And here it is, the mechanism working:

As can be seen in the video, the mechanism is not perfect: the penguin does not quite close its eyes. But it is not far from doing so, and I think the action itself looks pretty nice. The project could be improved if the original eye construction worked better (since it did not move easily before I added the motor, it was not going to do so afterwards either). A nicer head shape and a better way of decorating it would help too.
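
For anyone reproducing the last step, a minimal Processing sketch for driving the blink servo through the Arduino (Firmata) library could look like the one below; the pin number, the two angles and the timing are assumptions rather than the exact values I used:

import processing.serial.*;
import cc.arduino.*;

Arduino arduino;
int servoPin = 9; // assumed: the pin the eyelid servo is wired to

void setup() {
  println(Arduino.list());
  arduino = new Arduino(this, Arduino.list()[0], 57600);
  arduino.pinMode(servoPin, Arduino.SERVO);
}

void draw() {
  arduino.servoWrite(servoPin, 0);  // wire relaxed: eyes open
  delay(2000);
  arduino.servoWrite(servoPin, 80); // wire pulled: eyelids close
  delay(300);                       // a quick blink before opening again
}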

Alicja’s Afloat Documentation

Afloat: What is it and why?

In my capstone project, which I titled Afloat, I explored the relation between visuals and sound. It was composed of three TV screens, two webcams, one video and two soundtracks, one of which was a poetic travelogue and the other a memoir of a relationship.

My inspiration for the project included experimental films like Chris Marker’s Sans Soleil and Chantal Akerman’s News From Home, as well as the works of some video artists, such as W.A.N.T // WE ARE NOT THEM by Atif Ahmad, Cell by James Alliban and Keiichi Matsuda, and China Town by Lucy Raven. As I am fascinated with the medium of film, I wondered whether letting my audience interact with the three screens, and in this way shape the narrative, would make their experience of my work more personal.

Visuals and Sound: Shooting, Writing, Assembling, Reshooting, Rewriting, …

When I first started shooting the videos I ended up using for this project, I was not thinking about Afloat yet. Mesmerized by the scenes unfolding in front of my eyes, I just wanted to document my amazement, subconsciously knowing that the footage was not destined for oblivion in the depths of my external drive.

As I was filming, I was writing as well. Often, words and images come to me in pairs, sometimes complementing each other, sometimes clashing carelessly, all the while making me re-observe and re-think all that I see.

That was from June to mid-November last year: five and a half months of trying to make sense of Latin America while teaching English in Nicaragua, studying in Argentina and traveling in Chile. At the back of my head, I must have thought I was drafting a second installment to Off, my video about the other America, even though the idea of Afloat was actually older than that, and traced back to the Cooking with Sound class I took at ITP in the Fall of 2015.

We met in the afternoons in a classroom with one glass wall and the rest made o

Most of the footage came from those travels, but I also incorporated some shots from Austria, Shanghai and New York. As I was filming, I was also writing two scripts: a monologue detailing my travels in South America, and a dialogue unveiling the end of a relationship. I took my time redrafting them so that the differing stories could fit the same set of visuals. Once I had them ready, I asked my sister, Ola Jader, to record the two soundtracks, one on her own, and the other with a friend of hers, Jordan Brancker. I considered using a different person’s voice for the monologue, but I decided that the thematic links between the two soundtracks were strong and worth highlighting by using the same voice.

At the editing stage, I spent a lot of time color-correcting and synchronizing the separate soundtracks with the images. I also added background sounds, which I either took from my other footage or recorded (like the water flowing in the shower). I am quite happy with the final videos, even though in the future I would like to replace some shots with new footage, and perhaps re-record the voiceover as well, since at the moment it sounds a little unbalanced, with some parts differing significantly from others.

When it comes to the technological side of my project, at the beginning I considered using a Kinect, but in the end I decided to work with OpenCV on Tyler’s advice. This Processing library proved quite easy to use, especially since I could consult open-source code found online (in particular this example by ManaXmizery). I really liked the fact that the frontal-face cascade only recognizes faces looking straight at the webcam, because it let me program the sound to play only when the viewer was actually facing the screen, and to stop when they turned away. Here is the code I drafted and used during the final presentation:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import processing.sound.*;

SoundFile file;
Movie myMovie;
Capture video;
OpenCV opencv;

void setup() {
  frameRate(60);
  fullScreen();
  myMovie = new Movie(this, "capfinvid.mov");
  scale(1.0);
  // speeding up playback to compensate for my computer's limited processing
  // power; this was the only way I found to keep the sound and image in sync
  myMovie.speed(7.5);
  myMovie.loop();

  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();

  file = new SoundFile(this, "dialocapfin.mp3");
  file.loop();
  file.amp(0.0);
}

void draw() {
  opencv.loadImage(video); // load the current camera frame
  image(myMovie, 0, 0);
  Rectangle[] faces = opencv.detect(); // detect frontal faces
  println(faces.length);

  if (faces.length > 0) {
    file.amp(1.0); // a face is detected: turn the volume up to 1
  } else {
    file.amp(0.0); // no faces detected: turn the volume down to 0
  }
}

void movieEvent(Movie m) {
  m.read();
}

void captureEvent(Capture c) {
  c.read();
}

Between the different screens, I only changed the name of the recording in the new SoundFile() line, or deleted the sound code altogether, since the first screen had no sound. Once the code was ready, I set up the three TVs in a room, the first one facing the entrance and the other two angled, forming a sort of triangle. It looked like this:

[Photo: DSC_0975]

The idea was that the audience would first see the images without any sound, forming their own understanding of what they mean, and then, once they turned around, get a chance to reconsider their interpretation by consulting the two other screens, which offer two other narratives.

For the installation to work, I had to place two webcams on top of the two TVs that included a soundtrack, and connect all three screens to computers. A technical problem I faced already at this point was how to start the sketches so that the visuals would stay in sync across the three TVs. I ended up using two wireless mice so that I could quickly start all of the Processing sketches one after another, which at least made the beginnings of the videos run in a fairly synchronized way; by the end of the work, though, the images on the TVs differed significantly because, as it turned out, I had used computers with disparate processing power. As a result, I also faced sync issues between the visuals and the sound, which I tried to fix by increasing the movie speed, but that worked on only one of the computers. Another method I explored was, instead of playing the sound and video separately, muting the video's own volume and unmuting it when a face appeared, but this approach did not work for me for some reason.
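
For reference, the mute-the-movie variant would drop the SoundFile lines from the sketch above, use a .mov with the soundtrack embedded, and toggle the movie's own volume instead; this is only a sketch of the idea (Movie.volume() is the relevant call in the Processing video library), not the version I got working:

// replacement for the draw() above, assuming myMovie carries its own audio
void draw() {
  opencv.loadImage(video);
  image(myMovie, 0, 0);
  Rectangle[] faces = opencv.detect();

  if (faces.length > 0) {
    myMovie.volume(1.0); // a viewer is facing the screen: unmute
  } else {
    myMovie.volume(0.0); // nobody facing the screen: mute
  }
}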

The last, but also extremely significant, issue my project suffered from was the unreliability of OpenCV. While it worked perfectly for me, it was not as good at recognizing other people’s faces, which puzzled me. Turning on the light in the room helped with this problem to an extent, since it supplied more information to the webcam, but it also somewhat inhibited the act of watching the installation. I wonder whether there is a different way to remediate this, or whether I should look into other face recognition technologies.

As users tested my project, I realized how different each person’s approach and timing were, and that made me happy, because it meant, as I had hoped, that for each viewer the experience of my installation was at least slightly different. Even though I still have lots of improvements to make, I think I reached my goal: letting my audience experiment with visuals and sound and create their individual understandings of my work.

Here is my presentation.

Here are the two videos with separated soundtracks:

Alicja’s Animatronics Assignment 3a: “Facial expressions of emotion: an old controversy and new findings”

Paul Ekman in “Facial expressions of emotion: an old controversy and new findings” relays studies on the universality of facial expressions across different cultures, showing that they are not learned or dependent on the culture a person grew up in. Furthermore, it appears that certain facial muscle movements are intrinsically linked to involuntary reactions: when people were asked to perform the movements associated with, for example, anger, their vitals expressed the sensation of anger. One interesting point the author makes concerns the “Duchenne smile,” which involves moving the outer part of the muscles surrounding the eyes and in this way expresses true enjoyment, as opposed to fake happiness or a grin, which do not engage this particular muscle, since it cannot be moved consciously.

To me, F.A.C.S., which stands for Facial Action Coding System, is a system through which each person sends information that can be easily decoded by the recipient, meaning that there are certain facial expressions, created by slight muscle movements, that correspond to very specific emotions.

Also, here’s an InstaMorph flower mounted on the servo motor:

Alicja’s Animatronics Assignment 2b

My sketch for this week’s assignment looked like this:

[Photo: DSC_0999]

I wanted to continue with the penguin theme and create an animal that says “A penguin’s got to do what a penguin’s got to do” (since we had to include a thought-provoking quote, and I think this one is to a certain extent self-referential, in that this penguin truly has no choice but to do what it is programmed to do; whether humans have a similar lack of choice I leave up for discussion).

In class, though, I quickly found out that it would be extremely hard to make a nice full-figured penguin in such a short time frame, so instead I opted for making just the head for now. The challenge even with that was that I wanted it to be three-dimensional, and so, with enormous help from the professor, I had these head-only sketches to work from:

[Photo: DSC_1000]

From this point on, I could focus on the actual fabrication:

[Photos: DSC_0953, DSC_0956]

I used three pieces of thick flexible wire and three pieces of thin wire to create the sphere (which of course is very far from a perfect sphere). Then I used two more for the beak, and I installed the servo at the end of one of them. Here is the basic speaking mechanism done:

What still bothered me about this model was that the beak didn’t look exactly like a beak, so I added three-dimensionality to it with the thinnest wires and paper tape. I also made eyes and gave my penguin a voice:

What bothers me about my penguin is that it looks extremely creepy, whereas I had wanted it to land more in the cute category. Maybe if I added nicer eyes, some kind of fabric, and a body, it would have looked better. It’s nice to see it working, though, even if the sounds are not well synchronized with the mouth movements.

Here is the code I used:

import processing.serial.*;
import cc.arduino.*;
import controlP5.*;
import processing.sound.*;

ControlP5 controlP5;
Arduino arduino;
SoundFile file;
int servoAngle = 90;

void setup() {
  size(400, 400);
  println(Arduino.list());
  arduino = new Arduino(this, Arduino.list()[2], 57600);
  for (int i = 0; i <= 13; i++) {
    arduino.pinMode(i, Arduino.OUTPUT);
  }
  arduino.analogWrite(4, 0); // start with the beak closed
  //controlP5 = new ControlP5(this);
  //controlP5.addSlider("servoAngle", 0, 180, servoAngle, 20, 10, 180, 20);
  file = new SoundFile(this, "penguin2.aiff");
  file.play();

  // "A penguin's got to do what a penguin's got to do",
  // one beak movement per syllable:
  syllable(90);  // a
  syllable(90);  // pin
  syllable(90);  // guin's
  syllable(90);  // got
  syllable(180); // to
  syllable(90);  // do
  syllable(90);  // what
  syllable(180); // a
  syllable(90);  // pin
  syllable(90);  // guin's
  syllable(90);  // got
  syllable(90);  // to
  syllable(90);  // do
}

// opens the beak, holds it for the given time (in ms), then closes it
void syllable(int hold) {
  arduino.analogWrite(4, 180); // open the beak
  delay(hold);
  arduino.analogWrite(4, 0);   // close the beak
  delay(90);
}

void draw() {
}

Alicja’s Animatronics Assignment 1a: Animation

For this assignment I wanted to experiment with stop motion, since I had never done it before and it had always seemed really interesting. I spent a long time thinking about and trying out different objects that I could animate, but I could not really come up with anything interesting; I guess my imagination is a little lacking in this department. Finally, after talking to my friend about seeing penguins last year, I decided to bring my stuffed animal, Pingu, to life. I have always loved the cartoon and have had the toy since I was about 5, so the idea of giving him a personality, even if only for a couple of minutes, seemed almost natural.

Another thing I have always loved is sleeping, and I believe it is a passion Pingu and I share, so it became the subject of my short video. From there, the process was pretty straightforward. I took pictures of the toy, slightly moving it or changing my camera position with each photo. After assembling the pictures in iMovie, however, I realized I did not have enough stills for the first part of the video, in which the animal is just enjoying his sleep. The solution, of course, was to take more, and once I did that, I started photoshopping the eyes out of the pictures that were to belong to that section. The next step was to add sound: an alarm clock, and a fragment of the classic “Blue Danube” waltz by Johann Strauss (I am not sure why this particular piece seemed so suited to the sequence; it must be a subconscious reference to a film I saw a while ago, though I cannot remember which one).

At this point the work was almost done, but I did not like the pacing: I had set each frame to last 0.2 s, but the first part felt like it should be slower, so I increased the duration of its stills to 0.3 s. Another tweak I eventually made was to add Pingu’s vocal reaction to the alarm going off, for which I used the sound from one of the cartoon’s episodes.

Here is the final result.

I think that if I were to redo this video, I would use better lighting and a tripod for a more stable animation. I could also experiment with a physical way of closing Pingu’s eyes, as opposed to photoshopping them out.

Alicja’s Animatronics Assignment 2a: Response to “Android Science: Toward a new cross-interdisciplinary framework” by Hiroshi Ishiguro

Hiroshi Ishiguro in “Android Science: Toward a new cross-interdisciplinary framework” explains that android science unites the disciplines of cognitive science and robotics in order to examine the interactions between humans and androids (1). It differs from robotics in that it attempts to make robots that have “an identical appearance to a human” (2) (as opposed to “robot-like robots” (2)).

In his essay, Ishiguro describes the process of making an android in great detail, from creating molds from an actual person, to choosing a skin material, to discussing the mechanisms that work best for making the figure move (air actuation being the quietest, but also “requir[ing] a large and powerful air compressor” (2)), to teaching the robot how to move and installing “distributed vision systems and distributed audio systems” (3). The author also traces the different tests that can be performed once the android is finished and relays their results.

Perhaps the most surprising finding for me was that “if a human unconsciously recognizes the android as a human he/she will deal with it as a social partner even if he/she consciously recognizes it as a robot” (5). I wonder if something similar occurs in our interactions with pets, or in children’s play with toys. At the same time, I am curious whether this means that, in order to make the audience treat the robot like a “social partner,” it is more important to engage the viewers in a deeply human activity (like a conversation) with the android than to make it look extremely realistic.

Alicja’s Animatronics Assignment 1b: The Role of Automata in the History of Technology

*I am sorry I am submitting this assignment so late – I didn’t realize we had to write a response on the first reading until now*

Silvio A. Bedini in his “The Role of Automata in the History of Technology” introduces the concept of automata, “the first complex machines produced by man […] by means of which he attempted to simulate nature and domesticate natural forces.” In other words, these creations did not originate from the desire to make something entirely artificial come to life, but rather were inspired by real life and, in particular, the inventors’ environments.

In the essay the author outlines the history of automata, tracing their beginnings all the way back to the writings of the Ancient Greeks (specifically, those of Ctesibius, Philon and Heron) and exploring the Renaissance-era creations, including fountains, grottoes, mechanical theaters and clock tower scenes, which featured “religious figures […] heralds, kings, warriors…,” and, finally, androids (“completely mechanical figure[s] which simulated […] living human[s] or animal[s], operating with apparently responsive action”). Among the different androids, Bedini discusses Jacques Vaucanson’s flute-players, Friedrich von Knauss’s “speaking heads” and mechanical writers, and Pierre Jaquet-Droz’s “the Writer, the Artist and the Musician.”

What I found fascinating about this reading is that it made me realize that inventors and artists have been attempting to make responsive mechanical human figures for much longer than I expected. I had always thought of robotics and the more modern attempts at creating intelligent androids as things inseparably aligned with the future and with science fiction films, not acknowledging that there was such a rich history behind this striving to create the perfect artificial human. The realization made me wonder why we, as the human race, find the idea of androids so appealing and so important that so many people over the years have dedicated their lives to creating these automata. In other words, what do these efforts say about how we see ourselves? How did these pursuits shape our identity? What drives us to continue this process today?