AE animation mid-term project

I am really sorry for breaking the equipment rule.

So here is the story of my first AE animation.

I took Comm Lab last semester, but I did a stop-motion animation for my final project, so this is my first AE animation, which was really challenging for me. I am a story-oriented person, so for this project I came up with the idea of life and time. I want to deliver the idea that even though a person has only a limited time to live, he/she needs to be brave and enjoy every minute in this beautiful world.

Consequently, my animation is the story of a water puppet whose life lasts only 24 hours: he is born in the water and finally becomes a droplet and returns to the river. I shot a video first, which is the basis for the story. It took me three days to complete this video piece, but it was really worthwhile because I got exposed to the natural world and had the chance to visit some places I had never been to.

Then I started sketching my puppet in Photoshop and imported it into AE. The animation part was the most challenging. I needed to consider every movement of the puppet and make sure each one was timed well. For the first half of the video, I animated the puppet without using the puppet tool, and because of this the process was pretty slow. For the second half, I used the puppet tool, which sped up my work quite a bit. However, I found the puppet tool's results were not as good as what I had done by hand, because my puppet is not designed like a real person with proper joints. So when I used the puppet tool, the puppet moved weirdly, and because of this my animation looks strange but interesting.

I am not satisfied with my first attempt in AE, but I found it really exciting to see my work rendered out and available to watch. The process was, indeed, really hard, but I enjoyed making this piece.

So here is the link to my mid-term AE animation:

The Dark Room – Group Project

Final Design Doc:

Concept: The user is immersed in darkness and must use their hearing to traverse the darkscape. Our intention is to have the user feel creeped out and uncomfortable, as if they are unwelcome in a strange place that exists without them.

Rules: The user begins in the center of the room. The user must walk around the VR experience with the HTC Vive controllers, triggering instances as they move. They must rely most often on the audio, but also must approach objects and lights to progress forward.


-Audio: voices, light switch, knife, ticking, breathing, train whistle and movement, Audio spatializer

-Visual: 3D models, modified lighting script, table texture

-Technical: computer capable of running VR, HTC Vive headset and controllers, bluetooth headphones, small room

The Process:

creepy mannequins!


Our game basically started with the idea that the user would follow sounds around the stage and that sets of items with a certain theme would appear. We moved from the initial idea of a department store in the dark to a sort of nightmarish, voluminous room that could feel both open and claustrophobic at the same time. Most of the work involved finding decent 3D models that worked together to convey the mood of a specific set, and then placing those models in specific areas that contained trigger zones. The trigger script used with each set, although it underwent a bit of a change into being array-based, was fairly simple. It basically just said: when the player collides with this trigger, remove the trigger and the current set, and instantiate the next set of objects and triggers.
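Stripped of the engine specifics, that array-based trigger logic is just a cursor over an ordered list of sets: each collision consumes the current trigger and yields the next group to instantiate. Here is a minimal sketch of the idea in plain Java (the class and set names are illustrative, not our actual Unity script):

```java
import java.util.List;

// Advances through themed "sets" one trigger collision at a time.
// The set names are stand-ins for the prefab groups in the game.
public class SetSequence {
    private final List<String> sets;
    private int current = 0;

    public SetSequence(List<String> sets) {
        this.sets = sets;
    }

    // Called when the player enters the active trigger zone:
    // consume the current trigger and report the set to instantiate.
    // Returns null once the sequence is exhausted.
    public String onTriggerEnter() {
        if (current >= sets.size()) return null;
        return sets.get(current++);
    }

    public boolean finished() {
        return current >= sets.size();
    }

    public static void main(String[] args) {
        SetSequence seq =
            new SetSequence(List.of("table", "train", "mannequins"));
        while (!seq.finished()) {
            System.out.println(seq.onTriggerEnter());
        }
    }
}
```

A cursor over a list like this is easy to rearrange: reordering sets or inserting a new one only touches the list, not the trigger logic.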

I really enjoyed doing a lot of the stage design and creating a sequence for the player. However, I think I would like to tighten up the mood and theme a little more as far as the style of the 3D models goes. Some are at odds with others, and it seems fairly obvious that we just sourced what we could find. If I could actually get good at 3D modeling, it could enhance the unity of the project.

The audio, though I think it could be improved in quality, is what makes the experience. It was great learning to work with Audacity and seeing how powerful a simple, free program can be. For instance, all the mannequin voices are my voice but sound radically different. The one issue we ran into during playtesting with the HTC Vive was that the audio sources conflicted with each other in such a small space. For playtesters who didn't have especially keen ears, it was hard to track sound locations. I think this is just something that requires a lot of testing and tweaking to get the audio source ranges to the proper distance.

Lastly, I have to comment on the VR experience in general. It was my first VR trial, and I was amazed at the quality of the visuals and audio. You are sucked into the world and really do lose your sense of space. Now that I know particle effects look absolutely stunning in VR (the train smoke, just wow!), I hope to try out some more in the future. This project has shaped my interest in games and 3D worlds a lot. Now that I've had a shot at working with the Vive, I don't want to move back to non-VR experiences and game worlds. Wearing the Vive vs. watching a screen in a chair are two completely different things, and I already have a preference for the former and look forward to how people express its capabilities in the future.

Lab 4

For this lab, my partner and I decided to use the keyboard directional keys to toggle shapes and colors. In the lecture prior to the lab, we learned how to use Processing to change colors and shapes in sketches, so we figured, why not combine this with keyboard interaction!

The functionality of the code below is as follows: you click the mouse on the canvas to draw a shape. If you press the left or right arrow keys, the color toggles; the up and down keys toggle the shape itself. The shape options are: +, circle, triangle, and square.

int theShape = 0;
int theColor = 0;
color[] colors = {
  color(255, 0, 0, 100), color(255, 255, 0, 100),
  color(0, 255, 0, 100), color(0, 0, 255, 100) };

void setup() {
  size(400, 600);
  background(255);
  smooth();
}

void draw() {
  if (mousePressed) {
    fill(colors[theColor]);
    switch (theShape) {
    case 0:  // circle
      ellipse(mouseX, mouseY, 60, 60);
      break;
    case 1:  // square
      rect(mouseX-30, mouseY-30, 60, 60);
      break;
    case 2:  // triangle (vertex() calls must sit between beginShape()/endShape())
      beginShape();
      vertex(mouseX+50, mouseY+50);
      vertex(pmouseX-20, pmouseY-20);
      vertex(mouseX-20, mouseY-20);
      endShape(CLOSE);
      break;
    case 3:  // plus sign
      rect(mouseX, mouseY-20, 20, 60);
      rect(mouseX-20, mouseY, 60, 20);
      break;
    }
  }
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP)    theShape = (theShape + 1) % 4;
    if (keyCode == DOWN)  theShape = (theShape + 3) % 4;
    if (keyCode == RIGHT) theColor = (theColor + 1) % 4;
    if (keyCode == LEFT)  theColor = (theColor + 3) % 4;
  } else if (key == DELETE) {
    background(255);  // clear the canvas
  }
}
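For the curious, the wrap-around behavior of toggles like these is usually done with modular arithmetic: going backwards adds size - 1 rather than subtracting 1, because Java's % operator can return a negative result for a negative left operand. A quick plain-Java sketch of the pattern (the Cycle helper is mine, not part of the lab code):

```java
public class Cycle {
    // Next index, wrapping size-1 back to 0 (RIGHT / UP keys).
    public static int next(int i, int size) {
        return (i + 1) % size;
    }

    // Previous index, wrapping 0 back to size-1 (LEFT / DOWN keys).
    // Adding size first keeps the left operand non-negative, since
    // Java's % yields a negative result for a negative operand.
    public static int prev(int i, int size) {
        return (i + size - 1) % size;
    }

    public static void main(String[] args) {
        int color = 3;              // last of the four colors
        color = next(color, 4);     // RIGHT pressed: wraps around
        System.out.println(color);  // prints 0
    }
}
```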


This was a fun exercise and inspired me to use different types of simple interaction to engage a user. Such an interaction can be used to enhance a game in which you want there to be different options for a player, or options for some sort of shape or image on a screen.

Screen Shot 2016-05-21 at 3.09.20 PM

Interaction Lab Final Essay

Justin Amoafo

Antonius Section

Interaction Lab

May 2016

What is Interaction?

From my perspective, Interaction is an active form of involvement. This is a very general definition; however, in light of our course and this semester's teachings, Interaction has taken on a new meaning for me. Interaction is innovation. It is making the most 'unnatural' moments or products a part of our daily lives. As technology has evolved since the millennium, interaction has taken several forms, both physical and mental. Now we have mobile devices that we are glued to, which we use to interact with the world. Technology has given us the illusion that we are having a human experience, when in fact these are fabricated experiences that don't necessarily exist. Good Interaction helps to create such experiences that feel real, almost like a second skin.

Final Project Proposal

My vision for this project is to create an electronic book that feels as far from electronic as possible. The benefit of electronic books is that you don't need to buy a million physical products in order to have a million books; you just need one. The issue, however, is that many electronic books don't feel like books at all. A Kindle feels like a black-and-white iPad with text. The experience isn't book-like at all: you swipe to turn pages and press a single button to enter a menu with all your books. As a photographer, I share lots of content on the internet. It reaches millions of people and is almost always consumed via a screen. I want to make consuming content feel a bit more interactive and natural, and my interactive book can definitely do so. My initial prototype will be made of acrylic and cloth, with a projection of images underneath, and will be improved upon as time goes by.

Critique on an established form of interaction

I believe that traditional storytelling can be improved upon greatly. Many people might not see it as a form of interaction, but in my humble opinion… telling a story or sharing an experience is one of the highest forms of interaction. You’re putting someone in your shoes and trying to replicate a feeling, action, or experience. Although we have films, books, video games, etc, I believe that Virtual Reality is a huge step in the right direction so far as improving storytelling goes. It is literally putting the user in someone else’s shoes and giving them the freedom to walk around. As VR continues to develop, I do hope that it is used as a form of literal storytelling so that one day, instead of listening to audiobooks, people can receive stories and content through first person exploration, rather than first person narration or consumption.

Throughout my experience with Interactive Media Arts at NYU Shanghai, I have taken on a whole new perspective on life. I know it sounds cliché, but it is true. Interaction Lab was one of the most difficult courses for me. Although I've taken a digital electronics class covering circuitry, seven-segment displays, and the like, this is the first program I've gone through that goes beyond just scratching the surface. I've been challenged to dig deep into my creativity and intellect and come up with ideas that have real-life, practical applications. In Interaction Lab specifically, I've struggled with making my visions come to life. This is due in part to my heavy workload and my lack of technical skills. However, I have full faith that with the skills that have been instilled in me and the resources I have at my disposal, no vision I have is impossible. Thank you for this amazing experience.

Lab 9


  • Work with images and audio in Processing and document it on the documentation blog.
  • Think about how you can incorporate interactivity and / or computation into this week’s exercise.

For this lab, I was excited to incorporate my own personal music taste into my Processing sketch. As a huge fan of Afrobeats, I intended to make an iTunes-like effect that uses the backbeat to change the color and orientation of rectangles and ellipses. I started out with the original Frequency Energy Beat Detection example from the Minim library and made some tweaks to make it iTunes-esque.

Processing Code:

import ddf.minim.*;
import ddf.minim.analysis.*;

/*
 * This sketch demonstrates how to use the BeatDetect object in FREQ_ENERGY mode.
 * You can use isKick, isSnare, isHat, isRange, and isOnset(int) to track whatever
 * kind of beats you are looking for; they will report true or false based on the
 * state of the analysis. To "tick" the analysis you must call detect with
 * successive buffers of audio. You can do this inside of draw, but you are likely
 * to miss some audio buffers that way. The sketch implements an AudioListener
 * called BeatListener so that it can call detect on every buffer of audio
 * processed by the system without repeating or missing one.
 *
 * This sketch plays an entire song, so it may be a little slow to load.
 *
 * For more information about Minim and additional features, visit
 */

Minim minim;
AudioPlayer song;
BeatDetect beat;
BeatListener bl;

float kickSize, snareSize, hatSize;

class BeatListener implements AudioListener
{
  private BeatDetect beat;
  private AudioPlayer source;

  BeatListener(BeatDetect beat, AudioPlayer source)
  {
    this.source = source;
    this.beat = beat;
    this.source.addListener(this);
  }

  void samples(float[] samps)
  {
    beat.detect(source.mix);
  }

  void samples(float[] sampsL, float[] sampsR)
  {
    beat.detect(source.mix);
  }
}

void setup()
{
  size(1024, 1024, P3D);

  minim = new Minim(this);

  song = minim.loadFile("ole.mp3", 1024);
  song.play();
  // a beat detection object in FREQ_ENERGY mode that
  // expects buffers the length of song's buffer size
  // and samples captured at song's sample rate
  beat = new BeatDetect(song.bufferSize(), song.sampleRate());
  // Set the sensitivity to 300 milliseconds. After a beat has been detected,
  // the algorithm will wait 300 milliseconds before allowing another beat to
  // be reported. You can use this to dampen the algorithm if it is giving too
  // many false positives. The default value is 10, which is essentially no
  // damping. If you try to set the sensitivity to a negative value, an error
  // will be reported and it will be set to 10 instead. What sensitivity to
  // choose depends a lot on the audio you are analyzing; in this example we
  // use the same BeatDetect object for kick, snare, and hat, but this
  // sensitivity is not especially great for detecting snare reliably (though
  // it's also possible that the range of frequencies used by the isSnare
  // method is not appropriate for the song).
  beat.setSensitivity(300);
  kickSize = snareSize = hatSize = 16;
  // make a new beat listener so that we won't miss any buffers for the analysis
  bl = new BeatListener(beat, song);
  textFont(createFont("Helvetica", 16));
  textAlign(CENTER);
}

void draw()
{
  background(0);

  // draw a green ellipse for every detect band
  // that had an onset this frame
  fill(0, 200, 0);
  float rectW = width / beat.detectSize();
  for (int i = 0; i < beat.detectSize(); ++i)
  {
    // test one frequency band for an onset
    if (beat.isOnset(i))
    {
      ellipse(i*rectW, 20, rectW, height);
    }
  }

  // draw an orange rectangle over the bands in
  // the range we are querying
  int lowBand = 5;
  int highBand = 15;
  // at least this many bands must have an onset
  // for isRange to return true
  int numberOfOnsetsThreshold = 4;
  if (beat.isRange(lowBand, highBand, numberOfOnsetsThreshold))
  {
    fill(232, 179, 2, 200);
    rect(rectW*lowBand, 0, (highBand-lowBand)*rectW, height);
  }

  if (beat.isKick())  kickSize = 32;
  if (beat.isSnare()) snareSize = 32;
  if (beat.isHat())   hatSize = 32;

  // size-animated labels, as in the original Minim example
  fill(255);
  textSize(kickSize);
  text("KICK", width/4, height/2);
  textSize(snareSize);
  text("SNARE", width/2, height/2);
  textSize(hatSize);
  text("HAT", 3*width/4, height/2);

  kickSize = constrain(kickSize * 0.95, 20, 32);
  snareSize = constrain(snareSize * 0.95, 30, 32);
  hatSize = constrain(hatSize * 0.95, 5, 32);
}
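The kickSize/snareSize/hatSize variables implement a peak-and-decay envelope: a detected beat snaps the value to 32, and each frame it shrinks by 5% until constrain() pins it at its floor. The same logic isolated in plain Java (the class and parameter names are mine, not Minim's):

```java
// Peak-and-decay envelope, as used for kickSize/snareSize/hatSize.
public class Envelope {
    private final float peak, floor, decay;
    private float value;

    public Envelope(float peak, float floor, float decay) {
        this.peak = peak;
        this.floor = floor;
        this.decay = decay;
        this.value = floor;
    }

    // Call once per frame; hit is true when a beat was detected this frame.
    // On a hit the value jumps to the peak; every frame it is multiplied by
    // the decay factor and clamped from below, like constrain()'s lower bound.
    public float step(boolean hit) {
        if (hit) value = peak;
        value = Math.max(floor, value * decay);
        return value;
    }

    public static void main(String[] args) {
        Envelope kick = new Envelope(32f, 16f, 0.95f);
        System.out.println(kick.step(true));   // jumps to 32, then decays once
        System.out.println(kick.step(false));  // keeps shrinking toward the floor
    }
}
```

The decay factor controls how quickly the visual "bounce" settles; 0.95 per frame at 60 fps fades a hit out in well under a second.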

Lab 7: Digifab


Design and Laser Flat Objects + Design 3D Object using Tinkercad


Laser Cutter



For the laser cutting etch, I decided to do an iteration of a logo I made for one of my clients, Gerard Adams. He has a podcast called Leaders Create Leaders, so the etch I made could double as a pin of sorts.

Here’s a screenshot of the Illustrator file. Antonius suggested that I play around more with the shapes next time.

I lost the image of the final creation in a previous documentation draft, but you can get an idea based on the screenshot.

Screen Shot 2016-05-21 at 1.55.20 PM

For the TinkerCad ring, I collaborated with Brian Ho. He knew his ring size (size 9), so I used the ring template and resized it according to his measurements. The design idea was Brian's; he was somehow inspired to make a 'chicken ring'. We began with an egg base (lol): an egg shape for the body and a smaller egg for the head. The rest of the anatomy was made of different shapes (cones, triangles, spheres, etc.).

I didn't really get the hang of TinkerCad after the first lab with the laser cutter, so I had to adjust a bit to the learning curve of resizing the ring. Another weird point for me was the perspectives in TinkerCad. I'm used to working on a flat plane, or in different windows to signify depth, so the 'scroll to rotate around the object' control took a bit of time to adjust to. I ended up remaking the base ring template 5 or 6 times because I wasn't sure which plane to put it on, but in the end I realized it didn't really matter. Below are screenshots of a few perspectives of our creation.

[Screenshots of the chicken ring from four perspectives]


Individual Game – Lake Mutanto


My game, Lake Mutanto, is about camping out near a lake and, quite unsurprisingly, the radioactive lake spawns weird mutant traits in rowdy teens that were hanging out by it getting high and drinking beer. For my game, I designed three mutant teens in Fuse, which was super fun and easy to use. However, Fuse will not open any longer and won’t re-download on NYUSH wi-fi, so I have only two of them in-game because I can’t access the third Fuse file to export an FBX. Here are my mutants:

[Image: the mutant teens designed in Fuse]

This project went pretty smoothly until around the gameState bit. At some point, my mutants' animations went haywire. They look very wacky and also slide towards the player when they are supposed to be standing still. This issue is the most irritating because I'm not sure where it went wrong; I tried changing the acceleration like in the guides and also researched other methods, and nothing has worked. It was definitely cool when the mutants would stop at the perimeter of the glow from the player's light.

I think that, at this stage, the game needs to be debugged and polished, with perhaps another mechanic to add to the fun of it. Currently, the player can easily outmaneuver the very basic AI of the mutants, even if the player is not faster than them. Also, I wish I had more time to add audio at the beginning of the player noticing how strange everything looks and calling their mom to come pick them up. That way, there’s a reason for the big black van to come barreling towards the fence at the end.

[Image: the big black van]

Goals for Future Changes:

-very smooth transitions between mutant animations

-a better animation with sound for the van coming to save the player (might knock down the fence instead of moving through it like a ghost)

-an intro of some sort, either text that indicates the player character’s thoughts about Lake Mutanto or audio indicating they want to get the hell out

-an improved health system; currently, just touching the mutants lowers health; I would rather the player lost health when the mutant did its kicking or swiping animation

-some sparse objects/sights of interest in the dense forest area; I want to naturally guide the player to the fence, but at this point it’s easy to get turned around and a little lost

-a more natural reason for the handlight to flicker on and off; I also tried lerping the light on and off, but it wouldn't work within the time restrictions; I would like to tweak that too and get it to flicker a little before it turns off. I was thinking of using a battery system where you can only keep the light on for so long before it goes out forever

-a menu and proper start and win screens

Final Assignment: Ruined Temple Level of Group Project

At the beginning of the class, my role in the project had much to do with creating the thematic aspects of the game. We had plenty of ideas that we wanted to implement, but it was equally important for us to select what was feasible within our skill levels and time limit.

From then on, our team members' roles simply diverged into working on our own individual environments. I got to work designing the interior and exterior of my temple, Kadallah went to work creating his awesome dystopian wasteland city, Mate began sculpting his desert, and Jeffrey as well.


The one aspect of our project's creation that I enjoyed the most was designing the architecture of the temple building. In my first Creating Immersive Worlds assignment, I mentioned how much I loved looking at the concept artwork of game environments. Implementing all of the small details into the temple's exterior and interior (for example, the broken floor tiles that jut out from the rest of the floor) made me feel like I was designing with the same mindset as my favorite creators.

The most challenging aspect for me was the technical implementation of the game features into my environment. I had a blast designing and modeling my building, but giving the player free movement within it proved to be much more challenging than I thought! The ruined floor tiles of the temple were made to give off a rustic aesthetic, but they actually began impeding the user's movement if I put down too many! Figuring out the scripts for more customized torches was also quite difficult for me.


Creating Immersive Worlds HW#4

Project name: Zombies around the Temple

Project Response Feeling: A feeling of cautious curiosity. There is much to explore in this world… but you’re not alone out there.

For my playtesting sessions, I selected two of my friends: one an avid video game player and one who wasn't. I felt that by choosing people with varying experience with video games, they could reveal more to me about what my game experience was doing correctly and what it was lacking. In the end, I think it was a good choice. The person experienced with video games was able to note some very technical aspects that broke his immersion, and the person who was not experienced with video games was able to speak more to the vibes of the experience rather than the technical issues.

Subject 1:
Creativity Score: 10
Emotional Response: The designs of the sculptures were certainly eye-catching. However, the way they were placed didn't seem like it had any rhyme or reason to it. It just seemed like they were strewn about the area. I was trying to find some meaning in their placement (like a map of sorts), but I don't think there was any to begin with.
Engagement Score: 27 seconds

Subject 2:
Creativity Score: 7
Emotional Response: The design of the zombie is a little too cute for how the environment looks! Maybe you should add a spookier one because, well, if your aim was to make me feel anxious you weren’t doing that good of a job.
Engagement Score: 5 seconds

Playtest Notes:
Thomas Wong: The speed is a little too slow for my taste right now. You should up it a bit, or give the option to run in the game. There's simply too much space to cover at the speed I'm walking.

Anthony Ho: The trees are blocking me from having any good view at the entirety of the area. Maybe try clearing out a patch of trees so players can effectively find where they’re supposed to find the escape car?

“Creating Immersive World Final Group Project: Escape the School” by Mercy Angela K Nantongo, Kalkidan Fekadu Eteffa, Katherine Thoma-Hilliard and Zhuoling Shi

Here is our final design document:

I was the game designer of the group project, which means I was in charge of designing setups and rules. I composed design documents, designed the layout of the building, took plenty of skybox pictures, searched for all kinds of assets in the Asset Store or online, and created original textures and materials, which I did not do in my individual project. I also made some 3D models to serve as assets in the game view. I liked the job of designing the rules because I am passionate about making fun or funny games, and I think about what I would like to have in a certain game. Arranging rooms and combining multiple rooms with multiple functions in one area is what I am good at. I guess because of my imagination, I can put things together in a creative way. These creative arrangements can also make a space feel real instead of artificial. I think this is very important for creating an immersive experience: the design should not be predictable, but it has to feel realistic.

I did not do a very good job making 3D models. I enjoyed the process of making them, and I love spending time making objects look more real, but I am not good with software. After I made the models, they could not be imported into Unity. I tried to use Blender to convert the format, but I really had no idea how to use Blender, which was really frustrating. Searching for assets and beautifying materials was pretty fun, though, and I loved spending time making my own materials.