ZZ’s Capstone Proposal

a short statement about what I am going to make 

What I am proposing now is quite different from my capstone brief. My current idea is to apply the photogram technique to motion tracking in order to visualize the accumulated shape/contour of human movement in an analog way.

A photogram is a photographic image made with light-sensitive materials and no camera. Objects are placed near or on top of the material, which is then exposed to light. The result is a negative shadow image whose variations in tone depend on the transparency of the objects.

The idea I have in mind is to tile multiple sheets of enlarging paper so the surface can be bigger than tabletop size and can be mounted on a vertical surface. The project will be a performative piece in a darkroom lit only by red safelights, where each participant will be asked to hold a fairly strong light source (bigger than a single LED). They can set the light down anywhere or hold onto it the entire time. As they move, the area of the photographic paper being exposed will constantly change. After a certain period of time, I will start “printing” the photo by spraying on (probably a lot of) developer, stop bath, fixer, and tap water. Then I will squeegee the photographic paper and wait until it dries completely.

I would also like to make this project more performative by inviting dancers to perform a piece that emphasizes interaction with the light sources and the large photographic paper.

This project relates to webcam motion tracking the way analog photography relates to digital photography: both record, store, and display movement data, but mine does so in an analog way.

a list of existing projects relevant to my work

  • https://vimeo.com/19006151
  • http://artweek.la/issue/september-22-2014/article/klea-mckenna-no-light-unbroken
  • http://www.wkozak.com/paulkozak/photography_files/art_photo_files/photogram_photo.htm
  • http://coincidences.typepad.com/still_images_and_moving_o/2004/08/httpwwwphotogra.html
  • http://www.vonlintel.com/Floris-Neususs.html
  • http://www.madeinbc.org/mibcshowcase/dance-and-photography/
  • http://www.joannaham.com/projects/nike/

a short statement about what I am going to write

An increasing number of artists are adopting new media to enhance their live performances on stage. As technology evolves, new media on stage have consistently updated the overall show experience with ever more astonishing effects brought by projection mapping, holographic projection, virtual reality, motion capture, interactive musical instruments built from non-traditional materials, and more. Live performance has certainly gone beyond the traditional live band, LED screens, and clichéd props.

Thanks to open-source tools and low-cost materials, even independent theaters and artists with fairly small budgets compared to big productions are able to bring in new media to create fascinating performances. In this research paper, I would like to trace the evolution of the new media adopted in live performance by analyzing the technological and artistic contexts of its different forms. The list of performances will probably include Coldplay, Yoga Lin, Perfume, Elevenplay (a Japanese dance troupe noted for incorporating advanced technologies into its work), Alfred ve dvoře (an independent theater in Prague featuring avant-garde new media performances), 3-Legged Dog (a production studio in New York creating experimental artworks), and more.

a list of texts relevant to my work

  • Digital Performance by Steve Dixon: https://mitpress.mit.edu/books/digital-performance (1)
  • https://www.theguardian.com/stage/theatreblog/2010/mar/23/stage-theatre-digital-technology-ished (0.5)
  • http://www.bbc.com/news/technology-17079364 (0.5)
  • http://www.music.umich.edu/muse/2013/fall/Technology-Takes-the-Stage.html (0.5)
  • http://www.huffingtonpost.com/2012/07/03/interactive-theater_n_1643115.html (0.5)
  • Documentary about Perfume: http://www.bilibili.com/video/av1386047/ (0.25)
  • Japanese news documentary about Perfume’s technology: https://www.youtube.com/watch?v=14ZXLbNZnVc (0.25)
  • Elevenplay on America’s Got Talent: https://www.youtube.com/watch?v=1CjX1r-gh8U (0.25)
  • Perfume and Elevenplay tech team: https://research.rhizomatiks.com (0.25)
  • Drone ballet: https://vimeo.com/163266757 (0.25)
  • Coldplay live with a laser harp: https://www.youtube.com/watch?v=EkMxw2tWlpc (0.25)
  • Coldplay’s Xylobands Intelligent LED Wristbands: https://www.youtube.com/watch?v=iUZtSVhTCTo (0.25)
  • Coldplay “A Sky Full of Stars” promotion video: http://jamesmedcraft.com/view/coldplay (0.25)
  • Beyoncé ‘Run the World (Girls)’ at the 2011 Billboard Music Awards: https://www.youtube.com/watch?v=5EwZ_AzDDM4 (0.25)
  • KYGO VR at the 2015 Nobel Peace Prize Concert: https://www.youtube.com/watch?v=ciXIDnNKhwo (0.25)
  • Lady Gaga Tribute to David Bowie at Grammy 2016: https://www.youtube.com/watch?v=eK2sQazh9QY (0.25)
  • Artisan Nike Rise 2.0: http://artisan.co.uk/work/nike-rise-2-0 (0.25)
  • Light Percussion: https://vimeo.com/35621950 (0.25)
  • Seventh Sense (Excerpt) / 第七感官 (五分鐘版): https://www.youtube.com/watch?v=iQlDEPLHPyQ&t=224s (0.25)
  • 林宥嘉Yoga Lin – 成全 (2017江蘇衛視跨年演唱會): https://www.youtube.com/watch?v=YguFk47S45s (0.25)

Final Project By Kefan Xu


What I decided to make for this final project is a fan with a container. The purpose of designing a fan like this is that it can be put anywhere on a desk with the fan always facing upwards, so when someone is too tired to work, they can simply put the fan below their face and enjoy the fresh air. I tried to make this fan user-friendly. To achieve this, I designed the internal structure of the container as a hole with a ledge inside, so the fan can be mounted on it. With this container, one won’t worry about being hurt by the fan, because most of its parts are inside the hole. The container was also designed so perfume can be added at its bottom, letting the fan carry the fragrance out of the container.

This is the first version of the container in Rhino. I created this complex shape by first drawing a top-like shape, attaching it onto a bigger top shape using the ptPanel3DCustom function, and then integrating them together. Then I drew three ring-shaped circles and attached them at the top, middle, and bottom of the original shape to form the ledge I mentioned. However, I ran into several problems when I tried to 3D print this shape. First, the shell-like parts on the top, as you can see, are too thin to be printed. Second, the support structures the printer generates during printing would end up inside the hole, making it impossible to fit the fan in it. To solve these problems, I made a second version.


As you can see, in this version I simplified the shape of the container and gave up using ptPanel3DCustom to build it, since that made it too complex to print. I added a few smaller holes around the big one, so users can add perfume to them as they prefer. They can even add different perfumes to different holes to mix the fragrances. And here is what I printed out.


The next step was much easier: I designed the shape of the fan blade in Illustrator and cut it out on the laser cutter, then simply attached it to the motor. The motor is controlled by an Arduino; once plugged in, it runs.


It works pretty well, but the wind it produces is a little weak; I might make it stronger by reducing some of the resistance in the circuit. Another problem is that when I tested it, the connection between the fan and the motor was not as strong as I expected, maybe because the hole I designed in the fan is a little too big for the motor shaft; I might redesign the fan with a smaller hole to solve this. Overall, I am quite satisfied with this project. I really learned a lot in the process of making it, and it gave me a vivid overview of how to take a product from a design to a real object and from a prototype to a finished product.

[IX Lab] Ring Making/ Laser Cutting

This lab was very fun to do since I got to see a laser cutter in action, something I had never seen before and thought was one of the coolest things ever. I enjoyed creating the pattern in illustrator as I haven’t used it in a few years and it was nice to get back to it. I created a hexagonal pattern that I thought looked pretty cool.

I also really enjoyed working with TinkerCAD. My only other experience with a CAD tool was Google SketchUp, and that was a few years ago, so I found it very interesting and refreshing. I created a ring with spikes placed around it at equal spacings.

[IX Lab] Face Tracking Lab

Partner: Ben Tablada

The purpose of this lab was to use the OpenCV library to overlay an image onto webcam footage. Ben and I decided to use face tracking, so we copied the example code from the website.

We used the posX and posY variables, which correspond to the X and Y position of the center of the rectangle created by the face tracking software, to overlay the image over the faces being tracked. When we had completed that, Antonius gave us the task of making it work with multiple faces, which proved to be harder. We discovered that we could treat the faces variable as an array and draw the image at every index of that array, so the image follows every detected face no matter how many there are.
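The per-face logic boils down to looping over the detected rectangles; here is a rough Python analogue of that idea (the (x, y, w, h) tuples stand in for OpenCV's Rectangle objects, and the function name is my own):

```python
# Sketch of the "one overlay per detected face" loop.
# Detection runs on a downscaled frame, so each rectangle is
# scaled back up to full-frame coordinates before drawing.

def overlay_positions(faces, scale=4):
    """Return one full-frame (x, y, w, h) placement per detected face."""
    placements = []
    for (x, y, w, h) in faces:
        placements.append((x * scale, y * scale, w * scale, h * scale))
    return placements

# Two detected faces -> two overlay placements, however many there are.
faces = [(10, 20, 30, 30), (50, 60, 25, 25)]
print(overlay_positions(faces))
```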

I really liked this lab since it started to form my ideas about what I wanted to do for my final project, and I think face tracking is one of the coolest things ever.

[IX Lab] Final Project

Instructor: Antonius

Partner: De’yon Smith


The purpose of this project was to create an app that lets the user draw an image with a palette and a wand, using color tracking, in numerous colors. After they finish drawing the image they want, it is placed onto any face the camera detects using face tracking. The app is meant to be a fun way for the user to draw their facial reaction or create funny images that then appear on the face of whoever is in front of the camera.

We had originally wanted to create a game, but we couldn’t come up with a concept we felt strongly about, so we ditched that and moved on to an entirely different experience. When we first created our painting idea, we hadn’t planned on a two-stage system; we had wanted to just paint and have it map onto the user’s face in real time. That, unfortunately, was infeasible, so we changed it to a two-stage system: one stage for painting and one for tracking. This not only made the coding easier, but also substantially cut down on processing demand, since reading serial data takes up a lot of processing power.

We wanted to make an authentic painting experience for the user so we had intended to create a paint palette and have the buttons look like paint splotches (that’s a technical term). But due to material restrictions, we had to use found materials instead. We used a piece of cardboard with holes in it to serve as the paint palette, and we cut pieces of felt to cover the buttons that we connected to the Arduino with soldered connections. This was both one of the most annoying and most fun parts of the project, learning to solder and making all the soldered connections. They kept coming apart when we were moving the project or trying to work on one part or another. But then that meant that I got to solder more connections, which I thought was fun, so it’s a pretty neutral point.
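The palette's job amounts to sending one comma-separated line of button states over serial per update; here is a rough Python sketch of the parsing (the message format and hex colors mirror our Processing sketch, but the function name is just for illustration):

```python
# Parse one palette line: four color buttons plus a fifth "draw" button.
# The index-to-color mapping mirrors the switch statement in the sketch.
COLORS = {0: "#4A7CCB", 1: "#F03C3C", 2: "#3BB42C", 3: "#FFF94B"}

def parse_palette_line(line):
    """Return (draw_flag, selected_color) from a line like '0,1,0,0,1'."""
    sensors = [int(v) for v in line.strip().split(",")]
    if len(sensors) <= 4:
        return False, None  # incomplete message: ignore it
    draw = sensors[4] == 1
    color = None
    for i, value in enumerate(sensors[:4]):
        if value > 0:
            color = COLORS[i]  # last pressed color button wins
    return draw, color

print(parse_palette_line("0,1,0,0,1"))  # (True, '#F03C3C')
```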

When we decided to make it a 2-stage system, we learned that we had to use PGraphics to make the paint form a transparent layer that could be overlayed onto the cam footage. This was another challenging yet fun part of the project, as it was a bit annoying at first, having to learn a whole new portion of Processing, but very rewarding in the end since PGraphics is such a cool and useful tool.
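Under the hood, a transparent paint layer over cam footage is just source-over alpha compositing; a tiny Python illustration of the idea (plain pixel tuples, not the actual PGraphics API):

```python
# Source-over blend of one painted pixel onto one camera pixel.
def over(paint, cam):
    """paint is (r, g, b, a) with a in 0..1; cam is (r, g, b)."""
    r, g, b, a = paint
    return tuple(round(a * p + (1 - a) * c) for p, c in zip((r, g, b), cam))

# A fully transparent paint pixel leaves the camera pixel unchanged:
print(over((255, 0, 0, 0.0), (10, 20, 30)))  # (10, 20, 30)
# An opaque one replaces it:
print(over((255, 0, 0, 1.0), (10, 20, 30)))  # (255, 0, 0)
```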

In the beginning, I was programming all of this by finding pieces of code that had useful bits that I could slightly change so that it could work for our project. The problem with that was that when I put all the bits together, they didn’t play nice with each other. So I had to start it all from scratch and ensure that everything we were doing was all working together and that there were no extraneous pieces.

This course has been very rewarding for me, as I have been learning to code for a few years now but have never had anything to do with it, and this class gave me a way to implement the code that I’ve been learning, so I had a lot of fun.

import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;
import processing.serial.*;

PGraphics pg;
int posX = 0;
int posY = 0;
boolean draw = false;
color c;
//Serial object
Serial pal;
//OpenCV objects
OpenCV opencv;
OpenCV opencvface;
Capture cam;
//Face tracking variables
Rectangle[] faces;
PImage smallerImg;
int scale = 4;
//Images for color tracking comparison
PImage src, colorFilteredImage; //ct
ArrayList<Contour> contours; //ct

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  //create OpenCV objects for color and face tracking
  opencv = new OpenCV(this, cam.width, cam.height);
  opencvface = new OpenCV(this, cam.width/scale, cam.height/scale);
  opencvface.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  //create smallerImg to cut demand for OpenCV
  smallerImg = createImage(opencvface.width, opencvface.height, RGB);
  //contours for color tracking
  contours = new ArrayList<Contour>();
  pg = createGraphics(width, height, JAVA2D);
  //Serial object for Arduino control
  pal = new Serial(this, Serial.list()[1], 9600);
}

void draw() {
  readSensors();
  if (key == ' ') {
    //tracking stage: draw the camera reflected so users see themselves normally
    scale(-1.0, 1.0);
    image(cam, -cam.width, 0);
    updateFaceDetection();
  } else {
    //painting stage: show the canvas and paint while the wand button is held
    image(pg, 0, 0);
    senseColor();
    if (draw) {
      drawToPG(pg);
    }
  }
}

void readSensors() {
  while (pal.available() > 0) {
    String temp = pal.readStringUntil('\n');
    if (temp != null) {
      temp = trim(temp);
      int sensors[] = int(split(temp, ','));
      if (sensors.length > 4) {
        //fifth value is the draw button on the wand
        if (sensors[4] == 1) {
          draw = true;
        } else {
          draw = false;
        }
        for (int i = 0; i < sensors.length; i++) {
          //print(sensors[i] + " ");
          if (sensors[i] > 0) {
            switch(i) {
            case 0:
              c = #4A7CCB;
              break;
            case 1:
              c = #F03C3C;
              break;
            case 2:
              c = #3BB42C;
              break;
            case 3:
              c = #FFF94B;
              break;
            } //switch
          } //if statement
        } //for loop
      } //if statement
    } //if statement
  } //while loop
} //function

void captureEvent(Capture cam) {
  cam.read();
  //copy the new frame into smallerImg at reduced size to speed up detection
  smallerImg.copy(cam,
    0, 0, cam.width, cam.height,
    0, 0, smallerImg.width, smallerImg.height);
  smallerImg.updatePixels();
}

void updateFaceDetection() {
  opencvface.loadImage(smallerImg);
  faces = opencvface.detect();
  //if faces are detected
  if (faces != null) {
    for (int i = 0; i < faces.length; i++) {
      stroke(255, 0, 0);
      noFill();
      //draw the painted image over each face, mirrored to match the flipped camera
      image(pg, width-faces[i].x*scale, faces[i].y*scale,
        -faces[i].width*scale, faces[i].height*scale);
    } //for loop
  } //if statement
} //function

void senseColor() {
  int hue = 56;
  int rangeLow = hue - 5;
  int rangeHigh = hue + 5;
  // Read last captured frame
  if (cam.available()) {
    cam.read();
  }
  // <2> Load the new frame of our movie in to OpenCV
  opencv.loadImage(cam);
  // Tell OpenCV to use color information
  opencv.useColor();
  src = opencv.getSnapshot();
  // <3> Tell OpenCV to work in HSV color space.
  opencv.useColor(HSB);
  // <4> Copy the Hue channel of our image into
  //     the gray channel, which we process.
  opencv.setGray(opencv.getH().clone());
  // <5> Filter the image based on the range of
  //     hue values that match the object we want to track.
  opencv.inRange(rangeLow, rangeHigh);
  // <6> Get the processed image for reference.
  colorFilteredImage = opencv.getSnapshot();
  // <7> Find contours in our range image.
  //     Passing 'true' sorts them by descending area.
  contours = opencv.findContours(true, true);
  // <8> Check to make sure we've found any contours
  if (contours.size() > 0) {
    // <9> Get the first contour, which will be the largest one
    Contour biggestContour = contours.get(0);
    // <10> Find the bounding box of the largest contour,
    //      and hence our object.
    Rectangle r = biggestContour.getBoundingBox();
    // <11> Draw a dot in the middle of the bounding box, on the object.
    ellipse(width-r.x - r.width/2, r.y + r.height/2, 30, 30);
    posX = width-r.x - r.width/2;
    posY = r.y + r.height/2;
  } //if statement
} //function

void drawToPG(PGraphics pg) {
  pg.beginDraw();
  pg.noStroke();
  pg.fill(c);
  //paint a dot of the selected color at the tracked wand position
  pg.ellipse(posX, posY, 20, 20);
  pg.endDraw();
}

Final Game Project

I struggled with this final project a bit because I moved from one idea to another: I started doing one thing and then changed to something else. I was trying to create a battleship GPS game and did a lot of coding, but I was not able to finish and decided to do something with no technology involved.

In class I was advised to create a Dada game, and I found the idea really interesting. I had tried geocaching, and the only thing I didn’t like about it was that the contents of the cache weren’t that interesting, just a log book. So for my final project I decided to build on the idea of geocaching. Dada is a game where you look for geocaches around the city, boxes holding lines of poems, and use them to create your own Dada poem. Tzara gave the following instructions on how “To make a Dadaist Poem” (1920):

Take a newspaper.
Take some scissors.
Choose from this paper an article the length you want to make your poem.
Cut out the article.
Next carefully cut out each of the words that make up this article and put them all in a bag.
Shake gently.
Next take out each cutting one after the other.
Copy conscientiously in the order in which they left the bag.
The poem will resemble you.
And there you are—an infinitely original author of charming sensibility, even though unappreciated by the vulgar herd.”

In my rules of Dada, you can mix the lines you get in any way you want. I adapted a map I had made for another project and placed a QR code for the web page on each box. I also created a treasure hunt app using Locatify. I wanted to post my dadas on the Geocaching website so other people could play, but unfortunately that isn’t possible, because geocaches cannot link to any websites or advertisements.

First, I had to make the boxes less visible



Then I put the QR code and a name on each box


Then I cut all the paper slips. Good “anti-stress” during finals; lots of cutting.



Then came the best part: walking around the city and looking for good locations to hide the geocaches. I spent almost a whole day on it.


And before hiding the boxes, I took slips from them to create my own poem as well.


The website:

The app:


LM: JSON Website – TA

For my website I decided to map Asiya’s GeoJSON data set. I had to make some modifications to the set, and since I had trouble accessing the JSON file on GitHub, I downloaded a local version and used that with the website. The changes I made were to ensure all the category names were consistent throughout the file and to add an image category with a link to a free-use image of each park (usually from Wikipedia). Then it was a simple matter of modifying the Mapping API code to show each category where I wanted it. I tried to change the point icon by just changing the color of the normal Google icon, but I couldn’t figure out the code necessary to do this. Instead, I found a free icon online that seemed appropriate for parks and used that. The result is a plain website that shows where some of the major parks of Buenos Aires are located. The code can be found on GitHub here.
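The category clean-up I did by hand could also be scripted; a small sketch of the idea, assuming a hypothetical `category` property and alias table (these names are for illustration, not the actual data set):

```python
import json

# Normalize inconsistent category names in a GeoJSON FeatureCollection.
# The property name "category" and the alias table are assumptions.
ALIASES = {"Park": "park", "parks": "park", "PARK": "park"}

def normalize_categories(geojson):
    """Rewrite every feature's category to its canonical spelling."""
    for feature in geojson["features"]:
        props = feature["properties"]
        cat = props.get("category")
        if cat in ALIASES:
            props["category"] = ALIASES[cat]
    return geojson

data = {"type": "FeatureCollection", "features": [
    {"type": "Feature", "properties": {"category": "PARK", "name": "Parque Tres de Febrero"}}
]}
print(json.dumps(normalize_categories(data)))
```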


Final Project

For my Final Project, I wanted to expand on my beacon project to make it more interactive, fun, and game-like. I added trivia questions that the player has to answer before being able to receive the next clue. These questions make the player explore the area around the intended landmark. They would have to read the descriptions of the landmark or walk around the area to be able to answer these questions. For example, when asking for the name of the farthest dorms from the NYU Prague academic building, the player would have to find a student and ask them. This will hopefully provide a more interactive experience for the player.

In the future, to add a more competitive aspect, a point system could be added. Answering a question on the first try would earn more points than multiple tries, and points could also depend on how quickly the landmark was located.
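As a sketch of what that scoring might look like (all the point values here are made up):

```python
# Hypothetical scoring: fewer attempts and faster finds earn more points.
def score(attempts, seconds):
    """Base 100 points, minus 25 per extra attempt, minus 1 per 10 s, floor 0."""
    points = 100 - 25 * (attempts - 1) - seconds // 10
    return max(points, 0)

print(score(1, 30))   # 97: first try, found quickly
print(score(3, 120))  # 38: two wrong answers and a slower find
```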



<!DOCTYPE html>
			Scavenger Hunt!!!

		<em> Come Join the Treasure Hunt!</em>

	In this game you will be given clues and questions which will lead you to your next destination. Compete with your friends to see who can find the treasure first. HAVE FUN!!!

	<p>First Clue:</p>
	<p>A giant moving head</p>


<!DOCTYPE html>
			Scavenger Hunt!!!

<img src = "http://www.thisiscolossal.com/wp-content/uploads/2016/05/kafka-1.gif">

		<em> Good job finding the statue of Kafka! Answer the following question to receive your next clue: </em> <p> Which of these nearby places did Einstein frequently visit? (click to answer) </p>

	<div id="choice1" onclick="incorrect()" href="#">
		<h2>Cafe Metropolis</h2>
	</div>
	<div id="choice2" onclick="correct()" href="#">
		<h2>Cafe Louvre</h2>
	</div>
	<div id="choice3" onclick="incorrect()" href="#">
	</div>
		function correct(){
			alert("You are correct! The next clue is: Imagine");
		}
		function incorrect(){
			alert("Sorry that was incorrect, please try again.");
		}


<!DOCTYPE html>
			Scavenger Hunt!!!

		<em> Good job finding NYU Prague! Answer the following question to receive your next clue: </em> <p> What is the farthest NYU Prague dorm from the academic building? (click to answer) </p>

	<div id="choice1" onclick="incorrect()" href="#">
	</div>
	<div id="choice2" onclick="incorrect()" href="#">
	</div>
	<div id="choice3" onclick="correct()" href="#">
	</div>
		function correct(){
			alert("You are correct! Congratulations on completing the Treasure Hunt! Your prize is being able to admire NYU Prague!");
		}
		function incorrect(){
			alert("Sorry that was incorrect, please try again.");
		}


Beacon Project

For my beacon project I wanted to create a simple treasure hunt using the beacons that were given to us. My plan was to create an interactive activity which would allow phone users to go outdoors and explore the world a bit more.

In this game the player will receive a clue. They will then try to locate the place the clue is hinting at. When they think they have arrived, they will scan the area with their phone using the app Phyweb. When they find the beacon, they will click on the link, which will congratulate them for finding the first clue and give them another clue directing them to the next location.

This is a simple game which has the potential to be much more, with interactive aspects like trivia questions and things like that. It could also be expanded from a city activity to a country activity, or even a global one.



<!DOCTYPE html>
			Scavenger Hunt!!!

		<em> Come Join the Treasure Hunt! First Clue </em>

	A giant moving head


<!DOCTYPE html>
			Scavenger Hunt!!!

<img src = "http://czechmatediary.com/wp-content/uploads/2015/01/number-7.jpg">

		<em> Good job finding the Lennon Wall! Here is the third clue. </em>

	<p> Best University in Prague </p>

Saadiyat Dash: a locative media game

For my locative media game I really wanted to do something that involved the Saadiyat campus since I’ve spent so much time here. I was thinking a lot about how during my time here, I never wander. Whenever I leave a location I always have somewhere to go next and I almost never stop somewhere else on the way. Because of this, I decided I wanted to turn this daily routine of getting solely from point A to point B into a bit of a location based game.

I divided the Abu Dhabi campus into 6 sections.

I wanted to include an old game mechanic in my new location-based game, so I decided to incorporate dice. To play, you roll one die three times: the first roll determines point A, the second roll determines point B, and the third roll determines one of six conditions.


For example, if I rolled a 1, a 5, and a 1, I would have to get from the Campus Center to the center of the A5 building without coming into contact with the sun. The game is meant to be played during the daytime between two people, or in teams of three. There are different ways to win each round depending on the condition.
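The three rolls can be simulated directly; a quick Python sketch, with placeholder labels standing in for the real six sections and conditions (only the Campus Center, A5, and the no-sun and triangle-tile conditions come from the game as described):

```python
import random

# Placeholder labels; the real game names all six campus sections and conditions.
SECTIONS = ["Campus Center", "A1", "A2", "A4", "A5", "Highline"]
CONDITIONS = ["no contact with the sun", "no stairs", "no triangle tiles",
              "ground floor only", "no elevators", "no talking"]

def new_round(rng=random):
    """Roll one die three times: point A, point B, and the round's condition."""
    a = rng.randint(1, 6)
    b = rng.randint(1, 6)
    c = rng.randint(1, 6)
    return SECTIONS[a - 1], SECTIONS[b - 1], CONDITIONS[c - 1]

start, goal, condition = new_round()
print(f"Get from {start} to {goal}. Condition: {condition}")
```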

The conditions were specifically chosen because of the landscape of the Abu Dhabi campus and its architecture.

Some interesting findings during play testing.

The parking garage was a large asset to people who wanted to avoid the sun. However, it is very easy to get lost in the parking garage, especially if you do not know the different corridors and their numbers.



Some parts of the parking garage are only accessible by elevator, so this definitely affects movement from one part of the campus to another.

A lot of the campus incorporates different triangle elements into its architecture, which is where the idea for the condition about not being able to step on triangle tiles came from. This was a really tricky one, and required game testers to use a combination of the high line, the parking garage, and the ground floor to get from point A to point B.