Medusa Gaze final documentation _Ruixuan

  Humans share a psychologically inseparable bond with their images and reflections. It is through facing this optical illusion, produced by specular reflection on an object's surface, that we learn and recognize what we look like and who we are. As infants, we face the person in the mirror and try to reach out to and interact with someone we consider as real and existing as "the Other", until the moment we realize that those apparitions are actually reflected duplications of ourselves. Then, as we grow up, our mirror images function more and more as a certification of our self-esteem and self-identification. We fix our sights and thoughts on how we look in the mirror or in photos, and on how others would define and judge our images, as if they were confronting the real us. More or less, we are bonded to our reflections, reflected by our reflections, and unconsciously defined by our reflections.

       As a research experiment trying to break the bond between humans and their reflected images, this interactive installation will be presented in the form of a video mirror that allows the audience to manipulate and transform their reflections with their gaze. Wherever audience members stare at themselves in the mirror, deformation and transformation effects will prevent them from confronting their "normal" images, creating a moment of dissociation and alienation between the object self and the reflected self, and even disrupting self-recognition.

       Framed by the metaphor of the mirror, and developed from the mythological context of Medusa and her supernatural gaze that transforms living beings into stone, this installation functions as an adapted version of Perseus's enchanted shield. Through the shield, Medusa's gaze is thrown back onto her own body, inflicting on her the inevitable punishment of petrification. Thus, anyone who stares into this mirror becomes Medusa and traps themselves in cycles of endless metamorphosis.

Click to view the presentation slides: PDF version and PPT version (view-after-download required)

Demo video:

Capstone user testing (Professor Michael’s session) _By Ruixuan

For the user testing, I prepared three different paper prototypes of the mirror to test users' potential reactions and interactions with it. The three pre-designed questions concerned: first, the attainability of the project — whether users would come to it and how long they would probably stay; second, users' imagined preferences for the size and shape of the mirror; and third, their opinions on projection mapping versus screens (either pixel screens or liquid screens).

A. Attainability:

All the users said it felt natural to come to the mirror and look at their reflections. Two of them said they could stay in front of the mirror for a while to see what would happen (because they know I am building an interactive project, so they are happy to wait for the interactions lolol). But one of them said that if there weren't any obvious changes for maybe 10-15 seconds, he would probably leave. So the duration between a user's first encounter and their first actual interaction is really important for keeping them engaged with the project.

B. Size preference:

Two users preferred a smaller piece of mirror, and one of them said: "it's cool to have the feeling that you are observing secretly through a narrow area, so that you have to come really close and get more of a shock when something weird happens right in front of your face." I think this is quite interesting and useful, and I will keep it in my notes when constructing the actual installation.

Another user didn't reject a smaller mirror, but leaned toward a larger piece with a regular geometric shape, reasoning that "if it looks more like the real mirrors we face every day, such as the one hanging over the sink in the restroom, the audience will get a clearer hint of what they should do with the thing."

C. Projection or Screen:

All three users liked the idea of projection instead of liquid or pixel screens, because screens are common and boring, while using a camera, lights, and projection may feel fresher and more interesting. One user also felt that projection may create more spatial connection with the environment and atmosphere where my project is displayed.

Assignment 1: Motion Design Definition and Dynamic Grids Practice _by Ruixuan PU

Motion design, as its name implies, involves "motion", "graphics", and "design", and produces pieces of digital footage or animation that create the illusion of motion, rotation, and transformation.

First, motion design is animated graphic design. It has to follow the principles of graphic design, such as alignment, balance, contrast, proximity, repetition, and white space.

Graphics are the components that move. "Graphics" can refer to many visual elements that take up the visual space or dimension and form the composition we see on the screen. They may include photographs, drawings, line art, graphs, diagrams, typography, numbers, symbols, geometric designs, maps, etc.

And for me, what makes motion design different from other types of animated digital footage is the continuity of the components and their motion flow, which can be called the "rhythm" of the motion. It is the repetition of similar elements (either the components or the motion transitions) that creates a fluent connection between the previous composition and the current one and makes the whole piece feel integrated.

Here is my "dynamic grid" practice. The motion is simple but suggests the topic and rhythm of an ocean wave: Wave

Final Project: The Transhuman Ear _by Sunny Pu

Objective: To create a pair of human ears that can transmit emotions through their movements.

Project Ideation here: Concept Essay

(*Due to lack of time, I didn't get to create the second ear. Therefore, so far it is just like Van Gogh – with only one ear. =v=)

Prototype and fabrication: I started by prototyping the mechanism that could make the ear rotate. At the very beginning, I placed two servos on a platform (a piece of cardboard) and let them move on two different surface planes, because I originally imagined that the ears could move in two different directions: from up to down, and from back to front.

But soon I found out that the servos were not moving as I expected. The two movements should happen around the same pivot point, instead of two different center axes.

Therefore, I needed to think of another, more efficient structure to imitate the movements. I did a lot of research online and finally found something quite useful: the Raspberry Pi Camera Pan-Tilt, a flexible mechanism that can rotate 360 degrees around a single center of rotation. This should theoretically solve my problem, so I started to build a similar structure following the instructions.
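To make the shared-pivot idea concrete, here is a minimal Python sketch (purely illustrative; the actual build drove the servos from Processing, and the function name is my own) that maps a target pointing direction to the two angles of a pan-tilt mechanism rotating around one pivot:

```python
import math

def direction_to_pan_tilt(x, y, z):
    """Convert a target direction vector into pan/tilt servo angles.

    Both rotations share one pivot, like the Raspberry Pi camera
    pan-tilt kit: pan swings left/right, tilt nods up/down.
    Returns angles in degrees mapped to a 0-180 hobby-servo range,
    where 90 means centered (pointing straight ahead along +z).
    """
    pan = math.degrees(math.atan2(x, z))                   # left/right swing
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))   # up/down nod

    def clamp(angle):
        # Shift so 0 deg -> 90 (center), then clamp to servo limits.
        return max(0.0, min(180.0, angle + 90.0))

    return clamp(pan), clamp(tilt)
```

For example, a target straight ahead (0, 0, 1) centers both servos at 90 degrees, while a target off to the side swings only the pan angle.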

The biggest issue during assembly was that I couldn't find suitable screws. I tried many materials and finally settled on really thick metal wire. Thanks to Professor Rudy and Marcela's great help, we cut the wire into pieces and tied them really tightly around the joint so that the whole mechanism was stable. We also used glue (super glue and hot glue) to strengthen the connection.

3D printed all the pieces…

Metal wire instead of screws…

Glue instead of screws…

Finish assembling!

The circuit

After we figured out the mechanism, I started to think about the visual presentation of the ear movements and the human interaction. The professor suggested that instead of using a Processing slider to simply move the servos, I could use Leap Motion to control the ears with my hand movements, which makes the interaction more playful. ↓

 

(*Oops. The ball joint accidentally fell out… I fixed the problem later so it didn't happen again…)

To make the project more complete, I decided to make a face for my ear, and also a structure to hold the face and the ear.

I "stole" a face model that someone had 3D printed before, and I also got some magic clay… Ready to model the face…

Modeling….

Finish modeling!

Ready to paint the skin texture…

Finish Painting!

Then the last step was to put everything together! I want to show the audience the profile of the face, with the ear right in front of them, so that it's easier for them to notice the ear.

The final profile look

Here is the final video: My friend Mate playing with the ear hahaha ↓

Assignment 6b: Concept essay _by Sunny Pu

People invented animatronics for entertainment, for imitating lifelike actions, and for exploring the scientific truth behind our material bodies. Along with the development of technology, people want to make animatronics more and more like real living beings and try to make their movements as accurate as possible. However, I think there are still many gaps and problems to fix. The biggest challenge for animatronics is imitating the precise, unconscious micro-movements of live creatures. These micro-movements are subtle and hard to catch and imitate, but they are crucial in making something look alive and natural. Nowadays, really advanced CGI can almost achieve the precisely realistic micro-movements that live creatures make. I have seen some examples. (Here is one: https://vimeo.com/80879503) But for animatronics, if we want to control a really heavy mechanical system with that extreme precision, there is still a long journey to go.

Going back to my final project, I am not going to make such a precise work of art, but I would like to experiment with a tentacle mechanism that can be used to make ears move. My inspiration comes from Disney animated characters such as Stitch and Dumbo, who have long ears and use them to express their feelings. Most of the time, when people talk about emotions, they refer to facial expressions and body movements, but few think about ears, because usually we treat ears only as a medium of hearing, and most of us do not have the ability to move our ears. (Some people can move their ears a tiny bit by consciously controlling the ear muscles.) In this sense, I hope to experiment and create a pair of human-like ears that can move organically and softly like those animated characters', and use the ears to express emotions.

I hope the mechanism can express three states: happy, sad, and relaxed. For example, when the person (or the subject who owns the ears) feels happy and excited, the ears will move quickly up and down, like a little fan; when he feels sad and upset, the ears will point down, like what puppies often do; and when he feels relaxed and comfortable, the ears will slowly move like tentacles, free and soft. These emotions are common and universal, so it's easy for people to recognize and distinguish them.
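The three states above can be sketched as motion parameters for a servo: a Python toy model, not part of the actual build, where the frequencies, amplitudes, and resting angles are made-up placeholder values:

```python
import math

# Hypothetical motion parameters for each emotion described above:
# oscillation frequency (Hz), amplitude (degrees), and resting angle (degrees).
EMOTION_MOTION = {
    "happy":   {"freq": 3.0, "amp": 30.0, "rest": 120.0},  # quick fanning up and down
    "sad":     {"freq": 0.0, "amp": 0.0,  "rest": 40.0},   # drooping down, no motion
    "relaxed": {"freq": 0.3, "amp": 15.0, "rest": 90.0},   # slow, soft tentacle-like sway
}

def ear_angle(emotion, t):
    """Servo angle (degrees) for a given emotion at time t (seconds)."""
    p = EMOTION_MOTION[emotion]
    return p["rest"] + p["amp"] * math.sin(2 * math.pi * p["freq"] * t)
```

Sampling `ear_angle` every frame and writing the result to the servo would produce a fast fan for "happy", a fixed droop for "sad", and a slow sway around center for "relaxed".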

There are two artistic works that inspire me and help me construct an approach to achieve this result. The first is the Necomimi Brainwave Cat Ears. By sensing the wearer's brainwaves, which differ according to emotions, the device lets the wearer move the ears with his emotions. The other is the tail mechanism of Zathura's Zorgon. It's a pretty advanced animatronic based on a tentacle mechanism, and the bionic structure works really well; it actually moves like a real dinosaur tail. I would say the idea of the brainwave cat ears is quite similar to mine, because we both want the ear to become a medium for reflecting and expressing emotions. But what concerns me more is imagining how human ears would actually express emotions: how would the ears react, and how would they move? They are not just a pair of fluffy animal ears extending my expression; they are our own ears. And I want my project to drive people to ask further questions: Do humans really need ears to express their emotions? If, in the future, human ears gain the ability to express emotions, is that an evolution or a degeneration?

Assignment 6a: The environment and the way you tell it _by Sunny Pu

As an update of my final project idea, I want to make a pair of human ears that can transmit emotions through their movements. Inspired by how animals (such as dogs, cats, and rabbits) use their ears as an important medium to express their feelings, I hope to imagine and give human ears the same ability.

If I put my creation in a permanent exhibit, the exhibit could be about "Animal emotion and its expressions", and models of different animal ears would be the main displays in the "Ear zone". (The other zones could be about eyes, tails, mouths, etc.)

I think the environment will influence the audience. The exhibit topic and the displays of animal ears will push the audience to link human ears with concepts like "emotions", "expressions", and "animal features", which people normally won't think about. (Usually, people treat ears only as a medium of hearing.) They might even consider further questions: Do humans really need ears to express their emotions? If, in the future, human ears gain the ability to express emotions, is that an evolution or a degeneration?

HW3: Group project sound map _by Shi Zeng, Ruihan Yang, Sunny Pu

  1. Water: The water flows at the center of the sewer. As long as the player is walking on the aisle (beside the water), he will always hear the sound of the water.
  2. Wind: The player can always hear the wind, but the wind sound is quieter than the water sound.
  3. The player's footsteps: The sewer is humid and wet, and the player's footsteps indicate that he steps into puddles. The player will hear his footsteps when he moves, and when he stops, the footstep sound also stops.
  4. Zombie approaching: Heavy steps with echo. The speed can be slowed down, or randomized later, to create the scary feeling of an unknown creature moving.
  5. Zombie roar: Plays when the zombie approaches. (Loop every few seconds?) The closer the zombie gets to the player, the louder the roaring.
  6. Gun reload and gunshot: Plays when the player shoots.
  7. Flashlight switch: Plays when the player turns the flashlight on/off.
  8. Trigger (?):
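The distance-based loudness in rule 5 could be implemented, for example, as a simple linear falloff. This is a hypothetical Python helper for illustration; the maximum hearing distance of 20 units is a placeholder, not a value from our sound map:

```python
def roar_volume(distance, max_dist=20.0):
    """Volume for the zombie roar, from 0.0 (silent) to 1.0 (full).

    Linear falloff: full volume when the zombie is right next to the
    player, fading to silence at max_dist and beyond.
    """
    if distance >= max_dist:
        return 0.0
    return 1.0 - distance / max_dist
```

In the game loop, the value returned for the current player-zombie distance would be applied as the gain on the looping roar sound.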

Assignment 4b: Final Project _by Sunny Pu

Before the class started, I was already interested in the facial expressions of the complicated animatronics used in the filmmaking industry. I went to an exhibition of Weta Studio and saw their high-tech facial animatronics. Watching these giant structures move as precisely as humans and animals do was really magical to me. I think the face is the part that transmits most of our emotions. Sometimes, even if a mechanism can move really organically, I still feel it's just a structure. Only with precise facial expressions is the object alive, not just moving.

I think nowadays really advanced CGI can achieve facial animation as precisely realistic as animatronics can. I have seen some examples. (I will find the video link later)

In this sense, the idea for my final project is quite simple and clear. I want to make a facial animatronic and give it different expressions. Stitch is the reference character I am going for. Stitch has really impressive emotional expressions on his face and ears.

I will try to make its appearance like Stitch's. As depicted in the Disney animation, Stitch is a cute alien who tries to get used to life on Earth. He cannot talk at the beginning, so his face (and body gestures, though we don't give him a body here) is all he has to express his emotions.

I want to make 5 facial expressions: asleep, awake, smiling, sad, and winking/blinking. The movement of a cartoon face can be more exaggerated than a realistic face, and it can express emotions more clearly. Later on, I need to look closely at the three parts that serve as expression primitives:

  • The eyes – eyebrows, eyelids
  • The mouth
  • The ears

I found this animatronic online which looks quite impressive: https://www.youtube.com/watch?v=92OUb4uLUhk

 

Assignment 4a: Response to the “Cellular Squirrel” _by Sunny Pu

  • What was the researchers’ main objective?  What considerations fueled that objective?

The main objective is to create an interactive call-handling agent that can alert both the user and co-located third parties to incoming phone calls with subtle, public, non-verbal cues, instead of conventional phone ringing or vibration.

The consideration behind this objective is that telephone interruptions can be quite harmful to our productivity at work and to our social and familial availability. Thus, to make the interaction enjoyable and make users feel empowered and competent, the human-machine interface should be based on the same social interaction paradigms that humans use.

 

  • What motivated the creators to use an animal figure rather than any other living creature or being?

The stuffed animal figure helps users avoid the uncanny valley; it arouses people's curiosity and affinity and leaves a strong emotional impact, since animal figures often evoke stories about people's experiences with their pets.

 

  • Identify the methodologies the project’s creators chose to accomplish their goal. Then clarify the following:
    • How did this/these choice/s serve their purposes?

The methodology the creators chose is based on a concept called "embodiment": a structural coupling between system and agent that creates the potential for 'mutual perturbation'. The embodiment is realized on two levels. First, the degrees of freedom of the animatronics allow the system to perturb its environment via physical movements. Second, the dual conversational capability, which enables the system to engage in spoken interactions with both user and caller, embodies the agent in the conversational domain.

 

  • Based on the experience and conclusions of this study, what would you propose as a next investigative step forward in this conceptual area, and why?

I think this interactive call-handling agent is overall friendly, gentle, and relaxing, thanks to its cute appearance and its human-like, non-verbal movement. However, its behavior and interaction are not efficient enough to catch the user's attention at first: as mentioned in the paper, during the debriefing about half of the participants didn't notice the squirrel waking up. Also, since the user holds the animatronic, the people around the user can see the agent more easily than the user can, so they may be interrupted by the agent's movement earlier and more easily than the user is.

Thus, I think the next step is to enhance the interaction with the user, letting the agent alert only the user while interrupting the others as little as possible.

 

  • Do you agree that the idea of introducing “intelligent agents” could help us communicate better? Why?

I think this kind of "intelligent agent" does help improve our communication, in the sense that it can intelligently handle incoming information through a pre-designed process, filtering out many unnecessary steps in responding to it. And with the help of the agent's intervention, people become more flexible to focus on the other things they care about.