[Capstone] ZZ’s Documentation | Exposed & Interlaced


Exposed & Interlaced – Exploring Motion in Analog and Digital


My project is a sequence of images chronicling a dance performance, divided into 10 fragments. The viewer will be able to view each fragment both as a long-exposure film print, representing analog means, and as an interactive lenticular that reveals a short video clip as the viewer changes perspective, representing digital means. For the final presentation, the fragments will be displayed sequentially to communicate the entire dance performance.



Conceptual Development:

  • This idea was formed by my own experiences here at NYU Shanghai and while studying abroad. I took the Intro to Film Photography class at NYU Prague with the great artist and professor Bara Mrazkova. I loved every second of the class and the time I got to spend in the darkroom, the unexpected surprises, and the special personality of film. More importantly, although we live in an analog world, almost every aspect of it has been heavily digitized. When you go to a concert, you see a wall of lit-up iPhone screens; when you go to a museum, you see people with lenses capturing people striking poses with artworks; people read PDFs; and so on. Almost every analog medium has one or more replacements that offer convenience on the user’s side when it comes to distribution, utilization, and manipulation.
  • Analog media seems much more valuable because of its delicate nuance. Unlike digital media, which can be created, viewed, distributed, modified, and preserved on electronic devices, analog media is more difficult to edit and distribute. So I would like to visualize the differences between the analog and digital mediums.
  • I also took a class with Professor Moon last semester called Kinetic Interfaces, where I had the chance to experiment with the Microsoft Kinect, a motion-sensing device, and I did a dance collaboration with my friend Ann.
  • So I would like to continue having dance performance as the content shared by these two approaches. AJ introduced the lenticular to me. Lenticular printing is a technology in which lenticular lenses (a technology also used for 3D displays) produce printed images with an illusion of depth, or the ability to change or move as the image is viewed from different angles.
  • So I came up with this project as a culmination of my skill set: I am applying the film photography techniques I learned in Prague together with the Kinect motion-sensing skills I learned in IMA. The project mixes media that are rarely combined, and it is an opportunity for me to contribute to fields I have always admired, such as motion capture, film photography, and performance-based art, while producing quality documentation in areas that have been poorly documented, like lenticular printing.
  • talk more about your ideation process*
  • what led you to this idea?
  • what references have you looked at?
  • how are you inspired by your references?
  • how can you apply the inspirations to your project?
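Lenticular printing, described in the bullets above, works by interlacing thin vertical strips from several source frames behind a lenticular lens sheet, so each viewing angle reveals a different frame. The project prints these physically, but the interlacing step itself can be sketched in code; this is an illustrative Python version (the function name and pixel representation are my own, not from the project):

```python
def interlace(frames):
    """Interlace same-size frames column by column for lenticular printing.

    frames: list of images; each image is a list of rows of pixel values.
    Output column j is taken from frame (j % len(frames)), so under each
    lenticule the strips cycle through every source frame, and tilting the
    print reveals a different frame per viewing angle.
    """
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [[frames[j % n][y][j] for j in range(width)] for y in range(height)]

# Two tiny 2x4 solid-colour "frames": the print alternates A,B,A,B columns.
a = [["A"] * 4 for _ in range(2)]
b = [["B"] * 4 for _ in range(2)]
first_row = interlace([a, b])[0]  # ['A', 'B', 'A', 'B']
```

In a real print, the column pitch is matched to the lens pitch of the lenticular sheet; this sketch ignores that calibration step.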

Technical Development:

  • what is the process(steps) of your technical development?
  • what does your timeline of development look like?
  • what kind of tools and/or technology will you use?
  • what technical references have you looked at?
  • what kind of technical challenges do you expect to encounter?
  • what are the actual dimensions of your device?
  • what kind of material(s) will you be using?
  • how do you expect the users will interact with your device?

Susie Chen_Capstone Interactive Project (Roopa Vasudevan)

Interactive Project: Auto-Contractible Hoopskirt

My project is about the hoopskirt. Because I am a big fan of Lolita fashion and cosplay, I usually wear a hoopskirt to give my dresses a graceful shape. However, it is not convenient to wear a hoopskirt through a subway gate or through a crowd. Thus, the specific idea I wanted to explore is creating a new user experience of the hoopskirt. And for me, I hope this project can become a real product that I can use.

To start with, I first thought about making use of the mechanism of the embroidery frame, which uses a motor to spin a screw and thereby change the size of the hoop. However, after doing some calculations, I found that I would need a roughly 60cm-long screw to shrink the radius of the hoop from 60cm to 50cm, which is impractical. Thus, I followed another tutorial to create my project.
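The arithmetic behind that conclusion: an embroidery-frame mechanism changes hoop size by feeding the band through a screw, so the screw travel has to absorb the full change in circumference. A quick check (my own reconstruction of the calculation, not the author's exact numbers):

```python
import math

def screw_travel(r_from_cm, r_to_cm):
    """Band length (cm) the screw must take up for a given radius change."""
    # Circumference change = 2 * pi * (delta radius)
    return 2 * math.pi * (r_from_cm - r_to_cm)

# Shrinking the radius from 60 cm to 50 cm removes about 63 cm of
# circumference, all of which must become screw travel -- hence the
# impractical ~60 cm screw.
travel = screw_travel(60, 50)  # about 62.8 cm
```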

Inspiration & Tutorial: The Personal Space Dress: https://urbanarmor.org/portfolio/the-personal-space-dress/

Materials Used:
Belt: https://item.taobao.com/item.htm?spm=a1z09.2.0.0.w4dSDq&id=45171266740&_u=25glkv90b1c
Umbrella: https://item.taobao.com/item.htm?spm=a1z09.2.0.0.v9rtxS&id=526394244756&_u=7ol0r8oed5c
Screw Rod: https://item.taobao.com/item.htm?spm=a1z09.2.0.0.oZ3txF&id=43294200259&_u=72uif0vm27f8
Gear Motor: https://item.taobao.com/item.htm?spm=a1z10.3-c.w4002-2299960975.26.mVJORv&id=544162533852
Switch: https://item.taobao.com/item.htm?spm=a1z09.

Step 1: Take apart an umbrella.
Start by cutting the strings that hold the fabric to each leg. Do this until each leg is free and you can remove the fabric entirely. Most umbrellas are secured in two places, at the top and middle, by a ring of wire. Cut or untwist the wire and slide each piece off of it. You will end up with 8 free umbrella arms. Observe how your umbrella arm expands and contracts. Because the umbrella arms were too long, I cut off the end pieces, and now it works like this.

Step 2: Make the belts and attach the umbrella arms.
Cut two belts according to your waist and hips, punch two holes at the end of each belt, and use string to tie the belt closed. Based on the holders of the screw rods, punch holes in the belt. I am using 6 umbrella arms, so I divide the hoop into six sections and punch holes accordingly. Then I tie the umbrella arms onto the belts.

Step 3: Install the linear actuators.
The mechanism of the umbrella is that when Point 1 and Point 2 are far apart, the umbrella is closed; when Point 1 and Point 2 are close together, the umbrella is open. Thus, you need a linear actuator to bring the bottom belt up.

A linear actuator looks like this:

However, the lightest off-the-shelf linear actuator that fulfills my requirements still weighs about 0.8kg and is 2cm in diameter, which is too thick and heavy for a hoopskirt. Besides, the deodorant sticks used as a substitute in the Personal Space Dress are not easy to control, and the servo motor may not be powerful enough. So I switched to a screw rod, which has the same mechanism as the deodorant sticks and can work with a really small gear motor. Now it looks like this:
I tried it on and spun the motor manually to see how it works:

Step 4: Install the battery and switch.

Step 5: Add the cover cloth and test.

I posted a short video of how my auto-contractible hoopskirt works on my Weibo page, and it got over 1,000 retweets in about 4 hours.
Lots of Lolita fashion lovers are interested in my design and expressed their eagerness for such a product. Therefore, I think it is a successful project because it accomplishes my goal. However, these enthusiasts also gave me various comments showing that my hoopskirt still has a long way to go to become a real product. The most frequently asked questions included: Is it heavy? How is it charged? How about using a wireless switch or an app to control it? Is it waterproof? For now, all the materials are from Taobao, and I think that if I want to make a real product, I need to cooperate with factories to customize every single part for a better shape and a more comfortable fit.

Overall design:
1: Belt 2: Screw Rod 3: Gear Motor 4: Holder 5: Umbrella Arm 6: Connection between the screw rod and the gear motor 8: Switch 9: Battery

Presentation PPT: https://goo.gl/SO7bjP

[ THE LIBRARY ] Baaria’s capstone documentation

For my interactive project, I wanted to create a VR experience that explored immersion. I’m studying immersion for my research essay, and I find it fascinating how immersion draws a person out of reality and into a different world almost seamlessly. I wanted to explore the concept of world-building in VR and how to design immersion. In the end, I chose to create a library and build my experience around getting “lost” in a book. I wanted this experience to represent the surreal element of fantasy as well as my personal love of books. I also thought books were a good representation of immersion, as each book can be construed as an “immersive world.”

I feel that my presentation last Monday was not satisfactory in explaining what exactly my project is so I hope this documentation will help clarify all the work I’ve done over the past ten weeks. I apologize for the length but I wanted to be as detailed as possible.



Currently, my project is a library hidden only in VR, a strange and magical one in which you can explore the different worlds within books, whether an endless field of reflections or a Venice street. At the moment I have one world within a world within a world (the mirrors), the only fully fleshed-out path the user can take. In future installments of my project, I wish to add more worlds, such as a Wonderland world and a Hogwarts magical world.

At the current stage, my project consists of three main scenes. The main and starting scene is the Library, which features giant books, normal books, and tiny books cluttered around, as well as elements from different stories, such as a pumpkin carriage from Cinderella that plays “Bibbidi-Bobbidi-Boo” as you approach it. Likewise, there is a sword in a stone, representing the story of King Arthur, that plays the “whoso takes this sword” narrative as you approach, and you can take the sword out. As you continue in the book world, there is a Venice street, reminiscent of my childhood favorite novel, The Thief Lord by Cornelia Funke. I wanted to pay homage to my favorite mysteries.


Venice Street, tucked between two rows of books



Spiral staircase of books.


To add to the element of immersion and worlds within worlds, there are currently 2 portals in the Library that lead to other worlds. One is a wardrobe that leads to an endless sky of mirrors, reminiscent of The Lion, the Witch and the Wardrobe. Another portal sits at the top of a staircase made of books and leads to the top of a spiral staircase in a mirror maze. There are many interactable objects in the Book World, and it serves as the main portal to all the other worlds.

Go up the book staircase to enter the mirror maze…


Trapped in a Mirror: at this dead end in the mirror maze, you can see some spirits trying to escape from the mirror!


This dead end features mirrors in a public bathroom.

The mirror maze is the second world. It consists of different types of mirrors and explores the concept of reflections. There is a hall of mirrors where recursion reflects you to infinity, and a carnival-like dead end where your reflection is transformed into a clown. In other mirrors, you see yourself as a screaming mask, or find yourself in a public bathroom. The mirror maze is randomly generated through code, and the different ‘selves’ you see yourself as are activated through raycasting.

And into the endless sky…

You can go through the mirror maze to the end, where there is a portal to the endless sky world. The endless sky world is the last world; it can lead back to the mirror maze, the library world, or another world, thereby creating a loop and completing the feeling of getting ‘lost’ in immersion. The endless sky scene is created using procedural generation, Perlin noise, and object pooling. The mirrors are activated via raycast.
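Object pooling, one of the techniques used in the endless sky scene, avoids the cost of constantly instantiating and destroying objects (like mirrors) by recycling a fixed set, keeping inactive ones on standby. The project implements this in Unity/C#; here is a minimal language-agnostic sketch in Python (class and method names are my own):

```python
class ObjectPool:
    """Recycle a fixed set of objects instead of creating/destroying them.

    In an endless scene, this keeps objects such as mirrors alive but
    inactive, reactivating them as the player flies toward new terrain.
    """
    def __init__(self, factory, size):
        self.free = [factory() for _ in range(size)]  # pre-allocate once
        self.in_use = []

    def acquire(self):
        """Take an object from the pool; returns None if the pool is empty."""
        obj = self.free.pop() if self.free else None
        if obj is not None:
            self.in_use.append(obj)
        return obj

    def release(self, obj):
        """Return an object to the pool so it can be reused."""
        self.in_use.remove(obj)
        self.free.append(obj)

pool = ObjectPool(object, 3)
a = pool.acquire()
b = pool.acquire()
pool.release(a)  # 'a' goes back to the pool and will be handed out again
```

The design choice is that allocation happens once, up front; gameplay only flips objects between the "free" and "in use" lists, which is much cheaper than instantiating new scene objects every frame.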


The Mirror Hallway: You can go through some of the mirrors in the endless sky scene. When you go inside the mirror, you get to a hallway that you must cross in order to come out of the mirror on the other side.


If you fly far enough in the endless sky scene, you will reach this mysterious floating island with a lighthouse (currently in development). I am personally a big fan of lighthouses, and I decided to add this island to give the scene more depth beyond just flying through the mirrors. In the future, I hope to populate this world even more.





First paper prototype

Initially, my project idea was to focus on a story within a story within a story, similar to Inception, but with more focus on being dreamlike and bizarre. The narrative of the experience was a dream that led you through a world of books, a world of mirrors, and a world of mazes, each world entirely separate and disconnected from the others. Below are some pictures from my initial prototype.


At this point, the world was all you could see in this picture: a stack of books and a flight of stairs. It didn’t develop into the full world until later.

This was a basic maze with cubes that blocked entrances; you avoided running into them.


In this iteration, the mirror world was a floating checkerboard platform in which you could view different reflections of yourself.



As you can tell, my project changed a lot from the initial stages. Even though the core concept of going deeper and deeper into something persisted to the end, I changed a lot of the underlying narrative behind the experience. This was because, throughout the process, I felt that the narrative of the experience wasn’t clear, so every time I accomplished one small thing, I felt stuck because I didn’t know what to do next. It wasn’t coming together; I felt like I had hit a wall. Going back to the storyboard and brainstorming more about what could make the experience more intricate helped, but it still wasn’t enough.

Anna suggested that, to overcome this block, I focus on just one world first and develop it fully. I chose to work on the mirror world and ended up combining the endless mirror scene with the maze scene to make a more cohesive mirror-into-mirror-into-mirror inception. This helped a bit, but I also don’t think my workflow is suited to working on just one thing at a time. My ideal is working on several projects at once and hopscotching between them whenever I get bored or stuck; that way I’m always productive, even if I’m procrastinating on one aspect. So while working on the mirrors alone helped me progress further, I wasn’t as creative in my ideas for the project as I usually am.

Then over the break, I went to Tokyo and visited the Studio Ghibli Museum. Seeing the quirkiness of the museum and being reminded of the magical element in Hayao Miyazaki films inspired me to change my idea. I had initially wanted to create a surreal experience similar to that of Miyazaki’s films: I wanted the user to feel lost in the magic. Being reminded of what true film magic was helped me return to my project with a fresh lens. I now approached my problem through the lens of Miyazaki: what would Miyazaki do? How would Miyazaki design a maze? What would Miyazaki’s library look like?

Looking at my project this way enabled me to come up with a better narrative for it and I tried to work so my project followed that narrative. So I changed my idea from being a loosely connected dream to being a more tightly woven narrative of a strange library, located only in VR, with an eclectic collection of oddities, adventures, and mirrors.

Of course, I changed my project to reflect this mindset only after the break, when I had just one more week left. I tried my best to incorporate what I had already built into the new experience, but of course there were going to be gaps. I think this is why my project might have seemed a bit confusing when I presented it last Monday. I hope this documentation helps to clarify.



  • SteamVR (solved) — for the first three weeks I worked on this project, there were issues setting up the SteamVR plugin to run the HTC Vive in Unity. It wasn’t connecting or working, and it was a different problem each day, which cut into my working time by an hour every day for three weeks.
  • FPS rate (solved) — partly because of the load of rendering multiple mirrors in a scene at once, my FPS was consistently low for the longest time, dipping frequently below 60 FPS, which is a big no-no for VR experiences. If the frame rate dips too low, it causes eye strain and disorients the inner ear (which is responsible for balance), thus causing nausea. Luckily it was only me experiencing it! I solved this problem by baking the lighting so it was not rendered in real time, and by turning mirrors off until the VR headset sent a raycast toward them. That way, the only mirrors rendered were the ones within x-distance of the player while the player was looking at them.
  • Mirrors! (semi-solved) — rendering mirrors in VR is especially hard because the left and right lenses of the headset are technically each a different camera, so a naive mirror looks odd through the two lenses. In the game engine, a mirror is actually a camera centered in the mirror frame that displays what it sees on a plane over the frame. I solved this issue temporarily with a stereo renderer plugin, which renders separately for the left- and right-eye cameras, but there are still some bugs.
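The mirror-gating fix described in the FPS bullet above boils down to a simple check: a mirror renders only when it is within some distance of the player and roughly in the view direction. This is a cheap stand-in sketch for Unity’s raycast check, in Python rather than the project’s C#, with the thresholds as placeholder parameters:

```python
def mirror_active(player_pos, player_forward, mirror_pos, max_dist, fov_cos=0.5):
    """Enable a mirror only if it is within max_dist of the player and
    roughly in front of them (approximating a look-at raycast)."""
    dx = [m - p for m, p in zip(mirror_pos, player_pos)]
    dist = sum(c * c for c in dx) ** 0.5
    if dist == 0 or dist > max_dist:
        return False
    # Cosine of the angle between the view direction and the mirror direction;
    # player_forward is assumed to be a unit vector.
    cos_a = sum(f * c for f, c in zip(player_forward, dx)) / dist
    return cos_a >= fov_cos

in_front = mirror_active((0, 0, 0), (0, 0, 1), (0, 0, 5), max_dist=10)   # True
behind = mirror_active((0, 0, 0), (0, 0, 1), (0, 0, -5), max_dist=10)    # False
```

Anything this check rejects never hits the renderer at all, which is why it recovers so much frame time compared with rendering every mirror camera each frame.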

Creative Block — As I said earlier, I had trouble throughout this project, up until my Miyazaki trip, with figuring out exactly what I was going to say. Partly the problem was that I wasn’t working on this project with Marjorie Wang — my usual collaborator in VR projects. We worked well together as a team and motivated each other to do the work. Now I had to motivate myself. I solved this by storyboarding every single time I got stuck.


Storyboards and notes for different interactions.




Week 1 { 02/6 – 02/10 }  : Brainstorming & Deciding on an idea for the VR experience (research on what other people have done)

Week 2 { 02/13 – 02/17 }  :  Storyboarding the idea and doing the paper prototype (pics of paper prototype shown in above in different section)

Week 3 { 02/20 – 02/24 }  : Starting gathering the assets for unity project & setting up the scene in Unity. Looking up tutorials and learning about procedural generation. Google Cardboard prototype (pics above in different section).

Week 4 { 02/27 – 03/03 } :  Break from project (emphasis on essay). Also struggled with SteamVR technical problems at this time, which severely limited testing of my project.

Week 5 { 03/06 – 03/10 } : Break from project (emphasis on essay). Also struggled with SteamVR technical problems, which severely limited work on the project (on some days, it took over an hour to debug).

Week 6 { 03/13 – 03/17} :  Populating the mirror maze with interesting dead ends and user testing with people on their opinions. (everyone got lost, some vertigo due to FPS). Introduced interactions, teleporting to move around, and flying in a maze of mirrors.

Week 7 { 03/20 – 03/24 } : More user testing with project and working on how different images are construed differently (many people looked at the mirrors not as mirrors but as portals) and how that could be used to make the experience seem more effective. Started work on getting the FPS rate higher and researching how to optimize experience for VR.

Week 8 { 03/27 – 03/31 } : Working on finishing up the mirror worlds: both endless sky and mirror maze before break so that after break, I can focus on the other worlds. This includes adding more detailed dead ends to the mirror scene such as the carnival, installing the stereo renderer plugin, and working on the FPS rate.


Week 9  { 04/10 – 04/14 } : Storyboard for new concept of Library. Re-vamping of book world to resemble new narrative and final touches on mirror scene and project overall for presentation.

Week 10 { 04/17 } :  Project presentation due .



For my project, I used Unity3D and C# scripting, as well as 3D models made in Blender. I had used Unity3D and C# for previous projects, so I was comfortable with the tools and didn’t feel overwhelmed by having to learn the basics.

During the course of this project, I learned how to deploy techniques that make building a game more efficient, such as procedural generation, the use of Perlin noise, and object pooling. I also learned the hunt-and-kill method of random maze generation, as well as endless-terrain mechanisms. Furthermore, I learned more about environment design and world-building in games, and was able to look at this project not just through the lens of a programmer but also as an artist.
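The hunt-and-kill algorithm mentioned above carves a maze by doing a random walk until it hits a dead end, then “hunting” for an unvisited cell adjacent to the carved region and resuming the walk from there. The project implements this in C# inside Unity; this is an illustrative Python sketch of the same algorithm (names and data layout are my own):

```python
import random

def hunt_and_kill(w, h, seed=None):
    """Return a w-by-h maze as {cell: set of connected neighbour cells}."""
    rng = random.Random(seed)
    maze = {(x, y): set() for x in range(w) for y in range(h)}
    visited = {(0, 0)}
    cell = (0, 0)

    def neighbours(c):
        x, y = c
        return [n for n in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)] if n in maze]

    while len(visited) < w * h:
        unvisited = [n for n in neighbours(cell) if n not in visited]
        if unvisited:
            # "Kill" phase: random walk, carving a passage as we go.
            nxt = rng.choice(unvisited)
            maze[cell].add(nxt); maze[nxt].add(cell)
            visited.add(nxt)
            cell = nxt
        else:
            # "Hunt" phase: scan for an unvisited cell next to the carved
            # region, connect it, and restart the walk there.
            for c in sorted(maze):
                if c not in visited:
                    seen = [n for n in neighbours(c) if n in visited]
                    if seen:
                        prev = rng.choice(seen)
                        maze[c].add(prev); maze[prev].add(c)
                        visited.add(c)
                        cell = c
                        break
    return maze
```

Because every cell is connected exactly once when it is first visited, the result is a spanning tree of the grid: every cell is reachable and there is exactly one path between any two cells, which is what makes the maze feel like a maze.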



I plan on continuing to develop this idea. Now that I know I want it to be a strange library, the next steps are pretty clear. I would like to create a beginning scene where the player actually ‘registers’ for a library card–a scene like this could also set up the narrative so the player understands what is going on.

Furthermore, I’d like to implement the game mechanic of literally reading a book in VR. This is something I have already been working on. I’ve already succeeded in getting Alice in Wonderland read in the 2D world in Unity–next steps would be to add VR integration.

As far as worlds go, in future developments I would like to create a Wonderland world, based on Alice in Wonderland, that the user can explore, and a world based on Neverwhere, the book by Neil Gaiman about the magical world of London Below. I also would like to create a Hogwarts magic-themed world.

I also need to focus on more user testing and fixing bugs. Experiences like these aren’t truly successful until they are refined and all the wrinkles have been smoothed out.

Even though I struggled with this project in the middle, I’m quite happy with the direction it is going. I want this to be a side project I keep adding to in the coming months, to the point where it is, in its entirety, a library of worlds.

Bafang Womansions–Marjorie Wang’s Capstone Project

Bafang Womansions
Bafang Womansions is a virtual reality time capsule of my home life in the last year of university (Fall 2016 to Spring 2017). Virtual Reality is a wonderful way to capture the essence of my apartment, my friends and roommates, and our memories, as the virtual space can be revisited at any time. To document my senior year, I 3D modeled the space and furniture to scale, included as many personal details as time would allow, in the form of objects that represent moments in the past year, and added life-size 3D models of my roommates.
The Problem with the Concept
Last semester, I was working mostly on creating virtual reality experiences and games, with an emphasis on improving the ease of user interaction with a virtual, and thus unfamiliar, space. In collaboration with Baaria Chaudhary, I created Hyperspace VR, an investigation into the usage of sound as a trigger to direct user interaction. In our second project, The Last Star System, we incorporated more game-like elements into a space exploration experience, where the user travels to surrealist planets of our imagination in search of life. By the end of the semester, we had cultivated a good intuition for designing virtual reality worlds, and I was completely comfortable quickly creating low-poly 3D assets in Blender for prototyping in Unity3D for the HTC Vive.
The problem was, Baaria and I work so well together because I am the 3D modeler, the environment designer, the artist behind the visuals, while she is the programmer behind the C# scripts running the interactions. Thus, I attempted to formulate a project that would allow me to focus purely on the work I love to do, which resulted in concept one. Concept one was creating a series of nature CGI scenes of a single alien world with Cycles, Blender’s render engine, viewable in the Google Cardboard. I discovered that the most recent update of Blender supported stereo equirectangular rendering, which easily allowed me to preview my renders in virtual reality. However, I began to run into problems when I had capstone meetings with professors and advisors. I kept talking about my passion for 3D modeling, for creating the world, while pushing the user experience to the side. Concept one went through several iterations. One was a puzzle game, where the user would attempt to cross a lake by placing objects as a walkway. Another was an exploratory experience playing with a sense of scale and detail. Yet no idea stuck with me, and I spent the first few weeks of the capstone process doing several YouTube tutorials on rendering natural objects in Cycles.
The change happened when I scrapped my first idea and began 3D modeling my apartment. Prior to this point, I found myself failing to even open Blender, as I was uninspired by my concept. After a consultation with capstone advisor AJ Levine, I was able to conceptualize my new idea and to articulate why I enjoyed 3D modeling my apartment so much more than creating a relatively impersonal nature scene. The idea became: to document my senior year, I would 3D model the space and furniture to scale, include as many personal details as time would allow, in the form of objects that represent moments from the past year, and add life-size 3D models of my roommates. Once I began describing my project in this way (as a VR time capsule, built for my roommates and me), I found myself happily modeling the objects in my apartment in as much detail as I could. Every time I modeled a new object, I considered the stories it tells, and in discussing my project with friends, we recounted our different memories surrounding the same object.
The 3D Modeling
To create a VR time capsule, I needed to retain a level of realism in my models. Although I was no longer aiming for photorealism, such as that of CGI, I needed the space to retain the essence of my apartment. To do this, I used several techniques of photorealism in the modeling work. Although I began by eyeballing the dimensions of the space (windows, doors, walls, and furniture), once I began to measure every object I was modeling, it became much easier to set up a space that seemed realistic, no matter the placement of the individual objects. Another technique I used is beveling: in real life, no object has a perfectly sharp edge, and once I began to bevel the larger pieces of furniture, the effect was perhaps imperceptible, but significant. The last technique is adding seemingly extraneous details to achieve an overall greater sense of realism. For the apartment to look lived in, I needed to add the ugly AC unit, the paintings on the walls, the books on the shelves, and the empty water bottles. I would say this aspect of the project gave me the most trouble, as it was the most constrained by time. This is where I will focus when I continue to expand and improve the project.
Settlers of Catan Board, a game we play several times per week.

Interacting with the different objects in the scene.

Photoscanned model of Kate and a photoscanned model of me.
SketchFab Link to an earlier version of the Apartment Model
The Photoscanning
The project allowed me to become much more comfortable with photoscanning. I photoscanned myself and two roommates (out of the five people I wanted to include) with the Structure Sensor and the Skanect application on the iPad mini (with invaluable help from Kyle Greenberg). I experimented with scanning using Skanect and scanning using itSeez3D. Whereas Skanect models are lower resolution, they are better equipped to be rigged and animated with Mixamo. itSeez3D models were higher quality and great for static poses.
What Changed?
What changed between concept one and concept two? The time capsule allowed me to design interaction for myself and the people closest to me. The way we interact in a virtual version of a space we’ve taken a year to personalize will be far different from the way players would have interacted with my concept one project. With my capstone, I wanted to further explore the medium that I love to create in, 3D modeling, and the capsule allowed me to focus on the process as well as the end product I presented in class. Most importantly, Bafang Womansions became a personal project to remember what I love in a medium that I love.
Tech Overview
I used Blender for 3D modeling, Unity3D to place the scene together and to add interaction, the Structure Sensor+iPad+Skanect+itSeez3D to photoscan the bodies, all developed for the HTC Vive with help from the SteamVR plugin and VRTK.
Successful? Next Steps
The next steps are to continue modeling and placing objects into the virtual space to help the Bafang Womansions remember our time in Bafang Mansion. I would also love to begin recording my friends and me as we hang out around our dining room table, and place these audio clips into the time capsule, which will give the eventual player a deeper sense of presence.
I believe that I was successful in finding a project that represents me quite well. However painful the journey of discovery was, I am satisfied with the final concept. As for whether the project itself is successful, only time will tell. The questions that remain unanswered are: will the virtual space allow my roommates and me to remember our senior year? Will we even use it? As Matt pointed out during my presentation, will the technology the project is hosted on become obsolete, and therefore render my project inaccessible? My hope is for the five of us, Baaria Chaudhary, Katarzyna Olszewska, Saphya Council, Efae Nicholson, and me, Marjorie Wang, to remember the wonderful times we have had in the past four years of our friendship and to continue making memories together.
Throughout the capstone process, I began to see the value in the process of ideating a project and choosing to go forward or to go in a different direction. Clay Shirky’s advice during the first week held true throughout: when there are two paths to take, don’t spend time deciding which is better; just try one. For this realization, I would like to thank the entire capstone faculty. Thanks to Owen Byron Roberts for being an incredible professor during the Fall 2016 semester and kickstarting my love of Blender and giving me a Unity3D foundation. I’ve never had a professor go so above and beyond in helping our projects achieve higher potential. Thanks to Kyle Greenberg for inspiring my usage of photoscanned models and sharing his knowledge with me. Thanks to Christian Grewell for creating a space for uninhibited, stupid creativity, and providing the technical support needed to get our projects working. My biggest thanks goes to AJ for being a wonderful capstone advisor, for sitting down with me time and time again to brainstorm with me.

Jack’s Capstone – Log (Greenspan)

This is a chronological log of what I have done for this project.



  • Logo updated
  • Oil-based marker tested on canvas, nib needs to be pressed to release more ink
  • Randomly generated work tested, should implement some algorithms for the generating process


  • Artworks uploaded to robotart.org


  • Shirky’s eye done in four pieces, each 10 by 10 pixels
  • Cursor drawing done
  • Nicole drawing done
  • Console info added in preview window

Continue reading

Jane’s Capstone Project -Shanghai Corners

My website’s link: www.shanghaicorners.world

If your browser cannot fully support the website, learn more by watching the demo video here:

To see the full code, please click the github link: https://github.com/Jane1118/IMAProject

(P5.js code has been attached in this document)

The Presentation’s link is https://drive.google.com/a/nyu.edu/file/d/0B4QmSDB_zlI8ZDVEZE9WcHZ2aEU/view?usp=sharing



This project is a photo- and audio-based website containing an interactive map of the city of Shanghai. By combining a characteristic view of a block with 3D sound, it invites viewers to select places on the map out of curiosity about what it is like to actually be there.

Two major pieces of work were required for this project. One is data collection, including photo taking, audio recording, and interviewing. The other is website construction: building the framework and interactive animation with HTML, CSS, and JavaScript; creating the map on the basis of the Google API, Mapbox, and Leaflet; and beautifying the interface and designing icons with Adobe Photoshop and Illustrator.


Today’s technology allows us to explore any street of a city through maps and other information on the web, but it is hard to experience the feeling of being there. Many of my personal experiences triggered this idea. I regard myself as a city explorer and enjoy wandering the city, and I realized that unexpected and surprising things always happened, and impressed me the most, just as I turned a corner. I clearly remember an old man, wrapped in a red-blue-white nylon laundry bag, huddled under an LV advertising board made of countless small lamps at a corner in the raw cold winter. I also remember a couple running a barbecue stall, yelling in dialect at their kid who had not gotten satisfying grades, while across from the stall at the corner was a tumultuous bar street full of fashionable young people from around the world. The sharp visual and auditory contrasts in architecture, people, and culture around these corners inspired me. By documenting the sound and capturing the moments at corners across Shanghai, from the most cosmopolitan places like the Jing’an area to rather rural and developing areas like Baoshan, I want to discover the stories behind the city, share impressions and expectations, and offer website viewers a comprehensive perspective of the city and its culture.


  • Programming

To show the characteristics of corners, I chose a map to display photos and sound and to construct an interactive website around it. At first, I planned to use P5.js with the Mapbox API, since P5.js is very good at coupling sound with animations and makes it easy to deal with GeoJSON files.

But problems came up. The map I loaded from Mapbox was a static picture that could not zoom in, zoom out, or change its view position. Moreover, I could not properly insert the P5.js file into my HTML framework.

So I gave up on P5.js and turned to Leaflet.js with JavaScript, CSS, and HTML to construct the map frame, adding photos and logos that implicitly express each place as icons, with brief descriptions in tooltips.


However, this frame made it hard to display the diverse traits of a corner. Thus, I turned the icons into common markers and moved the descriptions from tooltips into the page layout.

From a field trip, I realized that three photos tend to properly display the traits of a corner, so I constructed an independent responsive page with a three-picture layout and used an <iframe> to insert it into the map page.

Since I want to give viewers a sense of curiosity about what a place is actually like before they see the pictures, I desynchronized the sound and pictures using the mouseover function in JavaScript. When viewers feel interested in a sound, they can click the marker to see the pictures while still listening to the surrounding sound.
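This hover/click desynchronisation can be sketched as a small state machine. This is an illustrative simplification, not the site's actual code; all function and variable names here are hypothetical:

```javascript
// Simplified model of the marker behaviour: hovering plays the corner's
// ambient sound only, and a click reveals the photos while the sound
// keeps playing.
function cornerState() {
  return { soundPlaying: false, photosVisible: false };
}
function onMouseover(state) {
  state.soundPlaying = true; // hovering starts the ambient sound
  return state;
}
function onMouseout(state) {
  // leaving the marker stops the sound only if the photos were never opened
  if (!state.photosVisible) state.soundPlaying = false;
  return state;
}
function onClick(state) {
  state.photosVisible = true; // the sound keeps playing after the click
  return state;
}

// Wired to a Leaflet marker and an HTMLAudio element it would look roughly like:
// marker.on('mouseover', function () { audio.play(); });
// marker.on('mouseout',  function () { if (!photosOpen) audio.pause(); });
// marker.on('click',     function () { openPhotoPage(); /* sound keeps playing */ });
```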

  • Field Trip

The field trips and data processing are the most important parts of my project. Based on my previous personal experiences and on interviews with both local Shanghainese and international students, I decided on several places, including Yongkang Road, Wukang Road, Jiashan Road, Jianguo W Road, Anfu Road, Jiangyin Road, Moganshan Road, Dongjiadu, Qipu Road, Menghua Street, Longchang Apartment, Huaihai M Road, Lujiazui, Jiangning Road, People’s Square, etc., across over 7 areas of Shanghai. These places are iconic and often reduced to stereotypes. For example, Yongkang Road is well known as part of the former concession, with many foreign-style villas and fancy cafes; Huaihai M Road is a famous shopping street full of luxury brands; Qipu Road is known for its big, bustling clothing wholesale market. But my personal experiences at these places broke those stereotypes. For my final work, I selected the thirteen most interesting points for the map.

The field trips took me around four weekends; subway, shared bikes, and walking were my ways of exploring the city.

The google album’s link is here: https://goo.gl/photos/CiTk8UFAqK2kPhKM9

  • Polishing
  1. Add a navigation bar. I divided the markers into 5 areas and used the panTo function to position the map, reducing the inconvenience of dragging it around. Moreover, based on user-testing feedback that the navigation bar was not recognizable, I made a dynamic icon as its trigger and redesigned the bar.
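The navigation-bar logic might look roughly like this. The area names and coordinates below are illustrative placeholders, not the site's actual data:

```javascript
// Each navigation-bar entry maps to a centre point; clicking it pans the map there.
var areas = {
  "Jing'an": [31.2304, 121.4470], // hypothetical [lat, lng] centres
  "Baoshan": [31.4045, 121.4890]
  // ...three more areas
};

// Look up the centre for an area, or null if the name is unknown.
function areaCenter(name) {
  return areas.hasOwnProperty(name) ? areas[name] : null;
}

// In the browser, a click handler on a navigation entry would then call:
// map.panTo(areaCenter("Jing'an"));
```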
  2. Create an entry animation with the P5.js Amplitude function. Processing and remixing the audio I recorded at all the places, I made a 30-second track for the entry animation. I preloaded a picture of the Oriental Pearl Tower, as a landmark of Shanghai, in P5.js and made the pixel-tile width of the picture change with the amplitude of the sound. I feel this animation helps viewers get into the project. But since the p5.js sketch could not be inserted into my HTML file, I used a screen recording of it instead.

Evaluation of user testing:

  • More impressive than previously imagined (people do have stereotypes)
  • Sounds are very diverse, even including voices in many languages
  • Brings people closer to the feelings of the city and of the corner, not just its surroundings and information


Suggestions for improvement:

  • Present photos in a more straightforward way to show that they belong to the same corner
  • More places – outskirt of the city
  • Improve the writing style of description
  • Make the description draggable to see the details of pictures more clearly.
  • More interactive: markers clustering / hover effect
var img, sound, amplitude;

function preload() {
  sound = loadSound('webopen.mp3');
  img = loadImage("data/tower.jpg");
}

function setup() {
  createCanvas(346, 597);
  sound.loop(); // start the entry audio so the amplitude has something to measure
  amplitude = new p5.Amplitude();
}

function draw() {
  var level = amplitude.getLevel();
  // map the amplitude onto the tile count, so the grid changes with the sound
  var tileCount = map(level * 1000, 40, width, 35, 40);
  var rectSize = width / tileCount;

  for (var gridX = 0; gridX < tileCount; gridX++) {
    for (var gridY = 0; gridY < tileCount; gridY++) {
      var px = int(gridX * rectSize);
      var py = int(gridY * rectSize);
      var pixelColor = img.get(px, py); // sample the tower image at this tile
      fill(pixelColor);
      noStroke();
      rect(gridX * rectSize, gridY * rectSize, rectSize, rectSize);
    }
  }
}


Interactive Project Documentation



My interactive project plays with time by creating some “ends” in an endless game, in order to make time feel shorter than it actually is. I explored the characteristics of music and applied them in an endless game to create these “ends”.


I am indeed not a good game player, especially in complicated games like Dota or LOL. Therefore, games with simple rules, like endless running games such as Temple Run, have always been my favorites. When I played Temple Run, the game was easy to master and kept me nervous. Within the first 5-10 minutes it is an interesting game. However, after playing for about 15-20 minutes, even if I had died at some point and restarted, the game gradually became boring, because I had experienced all the scenes several times and there was not much left to expect. The only thing that kept me playing was breaking the record. But during that playing time, time seemed longer than it actually was.

I did a small survey among family and friends who all have experience with endless games. The majority of them also told me that they quit an endless game mainly because the record gets so high that they can expect a long, boring journey before breaking it. Therefore, I became really interested in how to make the process of playing an endless game more interesting, so that time feels shorter.


My inspiration came gradually. I started from the question of how to make time feel shorter. We know that when we have a long way to go, milestones make things better. In an endless game, theoretically, the game never ends. So I was curious: what if I added some “ends”, setting up “milestones” in an endless game, to make the game more dynamic and make time feel shorter?

Then I looked for possible solutions to this question. I remembered a game called Rhythm Master, in which the player taps on color zones according to the rhythm of the music. Because there are countless songs in the world, the game is always different. So if I also introduced such a big music library into an endless game, the time could be divided according to song length; in other words, I could set up the “milestones”.


Since music has more features than just its length, I also dug into music to see whether other features could be used in my project.


To be honest, I changed my project a lot during the making process, due to idea iteration, technical issues, and time constraints.

First, in terms of idea iteration, initially I only had a vague sense of what I wanted to do. I simply wanted to add features from Rhythm Master to Temple Run to make the endless game more interesting, but I was not sure what exactly such a combination would bring. Since there are so many different features in Rhythm Master and Temple Run, I spent a lot of time figuring out which to keep and which to abandon. In fact, I struggled with this for several weeks, though at the same time I did start to make some technical progress.

During a meeting with Prof. Greenspan, she pointed out that what matters in my game is “TIME”. When I reconsidered her words, I found that the music features from Rhythm Master were the most important ones to use. So I dug into those features, chose the ones I could use, and finally came up with a clear idea of what to do.

Then, in terms of the actual making process, I chose Unity from the very beginning, because Unity is the game engine behind Temple Run as well as many other famous games, which indicated that the engine is mature enough. Also, I had some basic knowledge of Unity from a previous class, so I thought it would be easier for me to learn more about it. I decided to make a 2D game, because I did not want to spend too much time building a 3D model, which is not the central idea of my project.

The details of the game can be found here:


User Testing: I invited some friends to playtest. After they played the game, they suggested that it was a bit too simple with only three tracks. Therefore, I tried the simplest way to improve it: adding two more tracks. It turns out that although the method is simple, it actually works! The users got into the game this time.

Lastly, due to time constraints, I was not able to finish everything I expected, mainly because I did not fully develop my idea in the first several weeks. By the time I finalized my game, I did not have enough time to add more external features. Although I studied technical problems such as how to recreate, in a 2D game, an idea similar to “turning left and right” in a 3D game, I did not have time to apply it. In addition, I used pictures from Google, because I was not able to make my own within the time remaining.

Game Functions:

The goal of the game is to eat as many candies as possible while avoiding the wooden sticks. If you fail to eat a candy, the music and the wooden sticks speed up. If you eat the medical box, the music and the wooden sticks slow down.

Here are my applications of music in the game:

  1. I used songs of different lengths, from 18 seconds to 4 minutes, to give players a basic sense of the game; later, the length of each segment would match the actual length of its song. My idea is to make the song lengths move like a hill: short songs at the beginning, long ones in the middle, short ones later, then long ones again, and so on. This is to avoid fatigue.
  2. I also changed the pitch of the music. When you miss a candy, the music’s pitch goes up and the obstacles speed up, so when players hear the music suddenly become faster, they get more nervous accordingly.
  3. I chose 2-4 songs of different styles to avoid auditory fatigue. Between every two styles, I added a skippable 10-second break to (1) give the player some time to rest and (2) prepare them for the change.
  4. Because of the big music library, it is easy to avoid hearing the same song twice in one round, which makes the game more dynamic. The game keeps its simple rules but gains more variety from the music as time goes on.
  5. This time I did not use songs with human voices because of technical issues: I was not able to change only the speed without also changing the pitch. But it would be better to have songs with vocals, to get players more interested in the game.
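The "hill" ordering of song lengths described in point 1 could be implemented along these lines. The game itself is built in Unity, so this is only a sketch of the idea in JavaScript, and the function name is hypothetical:

```javascript
// Arrange song lengths (in seconds) into a "hill": short at the start,
// long in the middle, short again at the end, to avoid fatigue.
function hillOrder(lengths) {
  var sorted = lengths.slice().sort(function (a, b) { return a - b; });
  var rising = [], falling = [];
  for (var i = 0; i < sorted.length; i++) {
    if (i % 2 === 0) rising.push(sorted[i]); // every other song climbs the hill
    else falling.push(sorted[i]);            // the rest come back down
  }
  return rising.concat(falling.reverse());
}

// Example: songs of 240, 18, 60, 120 and 30 seconds
// hillOrder([240, 18, 60, 120, 30]) → [18, 60, 240, 120, 30]
```

Repeating this pass over successive batches of songs would produce the "and so on" wave of short and long segments the list describes.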

Evaluation and Improvements:

Overall, the game is quite successful in using music as a tool to play with time by adding ends to an endless game. However, the quality of the music output is currently not so successful: the inability to change only the speed produces strange sound effects when the pitch goes up, which decreases the overall user experience of the game.

Compared to my initial goal, I think the core idea has been presented in the game. But I still need more features, as my initial goal was a game with the core of both Temple Run and Rhythm Master. One improvement would be making the obstacles drop according to the music’s rhythm. Another would be adding more kinds of obstacles and rewards.

My very next step is to create a scene that looks much better; currently the scene is simple and not very attractive.

In my interaction project I used only 4 songs, to show the core concept of the game. Eventually, however, the game needs access to a whole music library.




Alicja’s Afloat Documentation

In my capstone project, which I titled Afloat, I explored the relation between visuals and sound. It was composed of three TV screens, two webcams, one video, and two soundtracks, one of which was a poetic travelogue and the other a memoir of a relationship.

My inspiration for the project included experimental films like Chris Marker’s Sans Soleil and Chantal Akerman’s News From Home, as well as the works of video artists such as W.A.N.T // WE ARE NOT THEM by Atif Ahmad, Cell by James Alliban and Keiichi Matsuda, and China Town by Lucy Raven. As I am fascinated with the medium of film, I wondered whether letting my audience interact with the three screens, and in this way shape the narrative, would make their experience of my work more personal.

I started the making of Afloat by amassing my footage. I mostly used videos taken in Nicaragua, Argentina and Chile last year, but I also incorporated some shots from Austria, Shanghai and New York. As I was filming, I was also writing two scripts, one being a monologue detailing my travels in South America, and the other a dialogue unveiling the end of a relationship. I took my time redrafting them, so that the differing stories could fit the same set of visuals. Once I had them ready, I asked my sister, Ola Jader, to record the two soundtracks, one on her own, and the other with a friend of hers, Jordan Brancker. I considered using a different person’s voice for the monologue, but I decided that the thematic links between the two soundtracks were strong and would benefit from highlighting by using the same voice.

At the editing stage, I spent a lot of time color correcting and synchronizing the separate soundtracks with the images. I also added background sound that I either took from other videos or recorded myself (like the water flowing in the shower). I am quite happy with the final works, even though in the future I would like to replace some shots with new footage and perhaps re-record the voiceover as well, as at the moment it seems a little unbalanced, with some parts differing significantly from others.

When it comes to the technological side of my project, at the beginning I considered using Kinect, but in the end, thanks to Tyler’s advice, I decided to work with OpenCV. This Processing library proved quite easy to use, especially since I could consult open-source code found online (especially this sketch by ManaXmizery). I really liked the fact that it only recognizes faces looking straight at the webcam, because that let me program the sound to play only when the viewer was actually facing the screen, and to stop when they turned away. Here is the code I drafted and used during the final presentation:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import processing.sound.*;
SoundFile file;
Movie myMovie;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);

  myMovie = new Movie(this, "capfinvid.mov");
  myMovie.loop();

  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // only detects faces looking straight at the camera
  video.start();

  file = new SoundFile(this, "dialocapfin.mp3");
  file.loop();
}

void draw() {
  opencv.loadImage(video); // loading the camera video
  image(myMovie, 0, 0);
  Rectangle[] faces = opencv.detect();

  if (faces.length > 0) {
    file.amp(1.0); // a face is looking at the screen: play the soundtrack
  } else {
    file.amp(0.0); // the viewer turned away: mute it
  }
}

void movieEvent(Movie m) {
  m.read();
}

void captureEvent(Capture c) {
  c.read();
}

Between the different screens, I only changed the name of the recording in the SoundFile line, or deleted the soundtrack altogether, since the first screen contained no sound. Once I had the code ready, I set up the 3 TVs in a room, the first one facing the entrance and the other two angled, forming a sort of triangle. It looked like this:


The idea was that the audience would first see the images without any sound, forming their own understanding of what they mean, and then, once they turned around, get a chance to reconsider their interpretation by consulting the two other screens, each offering another narrative.

For the installation to work, I had to place two webcams on top of the two TVs that included soundtracks, and connect all 3 screens to computers. A technological problem I faced at this point was how to run the sketches so that the visuals stayed in sync between the three TVs. I ended up using two wireless mice so I could quickly start all of the Processing sketches one after another, which at least made the beginnings of the videos run in a fairly synchronized way. By the end of the work, though, the images on the TVs differed significantly because, as it turned out, I had used computers with disparate processing power. As a result, I had to face sync issues between the visuals and the sound as well, which I tried to fix by increasing the movie speed, but that worked on only one of the computers. Another method I explored was, instead of incorporating the sound and video separately, simply muting the video’s volume, but for some reason this did not work for me.

The last, but also extremely significant, issue my project suffered from was running OpenCV reliably. While it worked perfectly for me, it was not as good at recognizing other people’s faces, which puzzled me. Turning on the light in the room helped with this problem to an extent, because more information was supplied to the webcam, but it also somewhat inhibited the act of watching the installation. I wonder whether there is a different way to remediate this, or whether I should look into other face recognition technologies.

As users tested my project, I realized how different the approach and timing of each person was, and that made me happy, because that meant, as I had hoped, that for each viewer the experience of my installation was at least slightly different. Even though I still have lots of improvements to make, I think that I reached my goal in the project, letting my audience experiment with visuals and sound and create their individual understandings of my work.

Here are the two videos with separated soundtracks:

cardboard shikumen, revision 2, documentation




cardboard shikumen, revision 2


Cardboard Shikumen, revision 2 is my capstone project, chosen after months of deliberation over the topic of the project. I started the original project in 2015, after learning that the neighbourhood of my family house would be demolished by the end of 2015 by the local government to make space for some fancy new malls or condos or equivalent. Most buildings in the neighbourhood were Shikumen houses built in the 1920s. This prompted me to think – is there anything I could do to save this part of my personal memory? And – to some degree – the collective memory of the Shikumen architecture of Shanghai in general?

Named after the iconic stone arch that encloses the main gate of the house, Shikumen is an architectural style unique to Shanghai, borrowing influences from both British row houses and traditional Chinese residences. At its apex in the late 40s, Shikumen architecture accounted for more than 60 percent of the residential buildings in Shanghai, and Shikumen houses witnessed many of the defining cultural and political moments that shaped the trajectory of China in the 20th century. However, due to rising land prices, the government and real estate companies have in recent years been demolishing Shikumen neighbourhoods in central Shanghai to make space for new developments, which has provoked much controversy over the questionable demolition and compensation practices and the loss of history in the disappearance of these neighbourhoods.

My plan was to create a virtual reality replica of my Shikumen neighbourhood before the physical thing got demolished. The project would function like Google Street View, presenting the neighbourhood as a series of 360 panorama images and video footage, and would also let the user view the scenes in virtual reality headsets. Considering that many of the Shikumen residents who might benefit from my project are not comfortable with technology, and unable or unwilling to afford dedicated VR hardware, I tried to make sure the resulting project would be accessible on consumer-grade hardware and would not require special technical expertise to operate. In addition, I tried to make the capture workflow and my virtual reality presentation software itself as simple and low-cost as possible, so that the software can also act as a simple framework for people looking to do similar documentation work in their own neighbourhoods.

In the summer of 2015, I started Cardboard Shikumen as an independent research project sponsored by the DURF fund at NYU Shanghai. The first version of Cardboard Shikumen functioned mostly as a web-based 360 panorama viewer and featured panorama photos and video footage of the neighbourhood of my family house in its final days, before most of the residents were evicted. The project at that time looked like this:


After finishing the project, I spent my next two semesters in Buenos Aires and New York City. There were still additional features outlined in my first proposal that I had not had the time to implement, and, due to the limited timeframe, the then-existing implementation left a lot to be improved. In addition, since my departure in 2015 there had been significant changes in the neighbourhood. The planned demolition by the end of 2015 did not happen. Instead, the neighbourhood was converted into a temporary film set for a TV series set in the Cultural Revolution, and the crew had somehow “refurbished” the streets to create the retro feel of the 1960s, which is itself ironic considering that they were fabricating history on top of real history that was soon to be demolished. Meanwhile, some parts of the neighbourhood had already been demolished.


(fabricating history)

I realised these changes actually form a story that can be told through the medium of 360 panorama streetscapes.

Considering these factors, I decided to revise my original Cardboard Shikumen project as my capstone project.

For the revision, I planned to improve the overall performance of the original project and implement two new features: a neighbourhood map, and the ability to see the changes that have happened in the 2 years since the original project was done. I consulted Professor Roopa and Professor Sakar Pudasaini to make sure the additions were significant enough to count towards a capstone project.

Making the neighbourhood map. The first thing I chose to add is a map of the neighbourhood. It allows the user to navigate quickly between different points in the neighbourhood and adds a spatial dimension to the existing project. I used leaflet.js to present the map, as it is an extensively documented project with excellent cross-platform support.


My original plan was to superimpose a map of the neighbourhood in GeoJSON over a layer of OpenStreetMap. The plan did not work out, because the geographic features of my neighbourhood looked bad when mapped in a (relatively) geographically correct manner, especially given the small on-screen space.


So in the end I elected to make a map that better exposes the topology of the neighbourhood’s lanes and the adjacent streets, even when zoomed in. I also built a custom coordinate system on top of the map and pinned my footage onto it. The user’s position is represented as a red dot on the map, and users can click on the map to teleport themselves to another location, much like in the original Google Street View.
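The click-to-teleport behaviour can be sketched as follows. The pure helper below finds the captured point nearest to a click; the Leaflet wiring shown in comments is an assumption about how it would plug in (with `L.CRS.Simple`, a plain x/y plane instead of lat/lng), not the project's exact code:

```javascript
// Find the captured footage point nearest to a clicked (x, y) position
// in the map's custom coordinate system. Returns null for an empty list.
function nearestPoint(points, x, y) {
  var best = null, bestDist = Infinity;
  points.forEach(function (p) {
    var d = (p.x - x) * (p.x - x) + (p.y - y) * (p.y - y); // squared distance
    if (d < bestDist) { bestDist = d; best = p; }
  });
  return best;
}

// With a Leaflet map built on L.CRS.Simple, the wiring would look roughly like:
// map.on('click', function (e) {
//   var p = nearestPoint(footagePoints, e.latlng.lng, e.latlng.lat);
//   userDot.setLatLng([p.y, p.x]); // move the red user dot
//   loadPanorama(p.id);            // show that point's 360 footage
// });
```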



Implementing time travel, and the evolution of the hardware. As discussed before, I realised that one of the biggest unrealised potentials of the project is to show the process of the demolition over time. Since the first batch of footage was taken in 2015, changes have happened in the neighbourhood as the residents were evicted.


But before going back into the neighbourhood, I attempted to improve my camera first. Back in 2015, due to the lack of affordable 360 panorama camera options, I built a 360 camera consisting of 6 Xiaomi Yi action cameras (cheap GoPro knockoffs) and a 3D-printed camera rig ordered from Taobao.

(the camera setup)

The rig was hard to operate, as the shutters of the 6 cameras had to be pressed one by one, and the photos they captured had to be manually stitched into 360 panoramas. In 2016 I briefly experimented with the Ricoh Theta, which takes 360 photos in one click and requires no post-processing, but the downside is that it is very hard for the photographer to hide from the photo when shooting, as the camera captures everything around itself.

Learning from these experiences, I tried making a helmet rig for the Samsung Gear 360 to use for my capstone. The rig was made from an ordinary safety helmet with a camera holder. I hoped this helmet rig would solve the problem of me being accidentally shot in the scene.



(Luis using the helmet rig to record an IMA studio tour)

However, I ended up ditching the helmet, as I found the helmet itself occupied too large a portion of the resulting image. In the end I settled on using a tripod plus the Samsung Gear 360 to capture the footage. I implemented a time switch in the user interface that lets the user easily switch between the footage from the two years.



Evolution of the tech stack. When I started the project in 2015, Unity was the go-to tool for building virtual reality projects. The game engine was provided free of charge to developers, and both Oculus and Cardboard released official SDKs for use with Unity. However, I found the engine too powerful for my purposes. In addition, Unity VR applications usually require the user to download and install the application before being able to use it, which is already a big technical hurdle for the Shikumen residents, not to mention actually acquiring an Oculus for around USD 600.

Taking my goals of maximising accessibility and minimising cost into consideration, I decided to use WebVR to build my project. WebVR is a proposed standard for introducing VR capabilities to web pages. To access WebVR content, users only have to enter a URL in their browser, the same way they access a normal website. The user therefore does not have to be re-educated to access VR content, and they already possess the right hardware to use it: their computers and smartphones. The drawback is that the existing WebVR implementation is still at a primitive stage. Only experimental builds of major browsers support WebVR natively; normal builds require a special JavaScript compatibility layer (a polyfill) that delegates WebVR to the WebGL functions major browsers already support.

I built the prototypes of Cardboard Shikumen in 2015 using webvr-polyfill, a basic JavaScript implementation of WebVR based on three.js. In the revision, I switched to A-Frame, a WebVR framework that supports defining a VR scene in HTML, which allowed me to structure the code in a more logical and maintainable manner.


In addition to making under-the-hood adjustments to the loading mechanisms, I also implemented a hash in the URL indicating the current location of the user, enabling users to quickly navigate to any point in the neighbourhood via URL, as well as to share a specific scene with others.
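The location hash might work along these lines. The hash format here is an illustrative assumption; the project's actual scheme may differ:

```javascript
// Encode the current scene as a URL hash like "#lane2/point5",
// and decode it back when the page loads.
function encodeLocation(lane, point) {
  return '#' + encodeURIComponent(lane) + '/' + encodeURIComponent(point);
}

function decodeLocation(hash) {
  var parts = hash.replace(/^#/, '').split('/');
  if (parts.length !== 2) return null; // malformed or missing hash
  return { lane: decodeURIComponent(parts[0]), point: decodeURIComponent(parts[1]) };
}

// In the browser:
// window.location.hash = encodeLocation('lane2', 'point5'); // when the user moves
// var loc = decodeLocation(window.location.hash);           // on page load
```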


Overall I think the changes and improvements I’ve made have made the project more interesting and have fulfilled the goals I set for myself earlier this semester, after much back and forth in determining the capstone idea. That being said, there is certainly room for improvement.

Technical limitations. WebVR is still a highly experimental technology. Developing this sort of content on the web means the developer is constrained by many factors, including the safety restrictions of browsers and the incomplete implementations of WebVR in the JavaScript frameworks. In my project, for example, the performance of the website degrades as the user spends more time exploring the neighbourhood. That is because the a-frame framework currently lacks a proper texture unloading mechanism, which causes the textures to clog the memory. A proper fix requires hacking into the underlying three.js infrastructure, which I could not deliver in a short timeframe. I also ran into issues with switching between video and image textures, an old issue I have not been able to fix properly since 2015. The issue involves multiple parties: browser video playback policies, texture management within a-frame, and a-frame’s interface with the underlying three.js.

Fortunately, an increasing number of developers are participating in the development of web-based 3D and VR content, and with each revision new functionalities and fixes are introduced. 

Limitations and biases in documentation design. As a former resident of the neighbourhood, I certainly have my own biases when documenting it. Due to time limitations in 2015, for example, I left some smaller alleyways undocumented. Had I lived in one of those smaller alleyways, I might instead have traded off other areas I considered 'insignificant'. I imagine such bias would persist even if more time and resources were allocated to this project.

From the perspective of the user, the current iteration of my project essentially offers the gaze of a passer-by on the neighbourhood. But what I am documenting is, after all, a social phenomenon. The footage may be able to tell how the place looked at a certain point in history, but without the actual participation of humans this project risks decontextualising itself from its historical and social background. This is something I would like to improve in future revisions of the project.

Next steps. Due to the project's personal significance to me, I would like to make Cardboard Shikumen a long-term project, documenting the changes in the neighbourhood over the next few years. As VR and WebVR technologies mature, I may be able to overcome the technical and design limitations outlined above.

Extra documentation. The original slides used at the presentation can be found here.

Acknowledgements. Cardboard Shikumen, revision 2. An independent research project sponsored by the Dean's Undergraduate Research Fund of New York University Shanghai. Revised to fulfill the requirements of the Interactive Media Arts capstone project.



Capstone Documentation: Jingyi Sun

Graphics PDF download: LINK

Presentation slides: LINK

Video demonstration: (played at increased speed)


My capstone is an interactive pop-up book recording a brief and selective history of artificial lighting. I wanted to combine paper engineering methods with circuits, and I also wanted to learn more about how lights have developed over the years. Lights are incredibly powerful devices. However, the abundance of lights in our everyday lives makes them seem mundane, and I realized that although I have always been interested in lights, I knew relatively little about them.

Research process:

I started by looking at general timelines, and the decision of which lights to include was highly subjective: I picked lights whose mechanisms I found interesting or thought were important. Once I decided which lights to include, I looked at general images of them online. My image references come from collectors' photos (generally of older lights; these images were really helpful in understanding their workings), old photos (also mainly of older lights, especially street lights), and diagrams/cutaway views (mostly of newer models).

(photos: arc lamp references)

(I made a slight mistake when referencing the top-left photo in my presentation: I referred to it as a kerosene lamp, but it is actually a carbon arc lamp.)


For text, I looked at encyclopedias (mostly the online version of Encyclopedia Britannica), news articles, patents, and writings of the inventors themselves (interesting read here: http://www.brikbase.org/sites/default/files/ies_049.pdf). The Smithsonian also has a very helpful (albeit ugly) website containing valuable information: http://americanhistory.si.edu/lighting/

I tried to include some information about the way each light works, along with the key people and dates relating to it.

Prototype and Preparation:

(photos: circuit sketches on scratch paper)

I started out by drawing out the circuits I wanted on scratch paper, writing notes to myself as I went along.

For materials, I bought 400 gsm paper, LED stickers, conductive ink, paint and tape. I also bought EL wire with the corresponding converters and cords, as well as batteries and a portable power bank. Although I originally intended to use conductive paint, it took a long time to dry and was very messy, and the ink did not work well at all, so for the most part I ended up using conductive tape as the main conductive material for the book.


(photos: first full prototype on thin paper)


I then made my first full prototype on very thin (Mead composition notebook) paper with conductive tape and actual LEDs. During this process, my main goal was to test the circuits and the reliability of the conductive tape, as well as to figure out how to join the circuits with the pop-up techniques I wanted to implement.


I also figured out during this stage that I could power all of the circuits with one battery by extending everything to the back of the pages. Once I had worked out how to build the circuits, and confirmed whether and how they worked, I moved on to printing.

To my surprise, printing was one of the harder stages; I had originally assumed it could be done on the big printers available at school.


However, the printers did not take my paper (it was too thick), and when the paper did go through, the ink did not adhere to it well enough.

(photos: smudged lines with the Epson)

I then went to ATS, hoping they could help me print on their Epson printers, but the quality of the print was not great, and the paper they offered was way too thin. I then turned to a print shop near school, but they did not allow customers to bring in their own paper and did not have black paper. In the end, I used their 330 gsm paper, which worked out well, and they printed a piece of "black" paper for me.


(correcting unfortunate typos)

I had to go back to the print shop several times to print out more materials because I either needed more copies, or found a typo and had to reprint. 

Putting it together:

I first laser cut holes running down the middle of the page, which would act as points where I could sew my pages together.

I then moved on to my second prototype, using the actual 330 gsm paper and LED stickers. I did not make a complete prototype this time, instead testing the paper mechanisms and circuits as I moved along each page.

With some pages, everything worked out really well on the first try (for example, the gas street light, carbon arc lamp, and fluorescent light pages). With others, I had to reinforce the circuit, but that was not too difficult either (for example, the kerosene lamp page).

(photos: development of the neon page)

Some pages took more time, such as the neon light page: it was not difficult, but I had to wait overnight to see whether the EL wire would conform to the spiral shape I wanted, and I had to be careful not to ruin the ink on the paper.


(photos)

Pages that include pop-ups, such as the lightbulb, HPS lamp, and LED pages, took many, many tests to figure out.

(photos: pop-up pages)

Although I am pretty happy overall with the effect created by the LED stickers, their brightness limits the area of the paper that shows light, so in some cases, such as the neon and fluorescent pages, I substituted EL wire for better consistency of the light.

Next Steps:

Overall, I accomplished what I wanted to do, and I learned a lot about the topic in the process. However, there is always room for improvement. I would like to have more precise laser cuts, since I did most of the cutting by hand while figuring out the specifics at the same time, when it was faster to just measure and cut manually. I would also like to improve the cover of the book, as well as the binding, and to figure out how to further reduce the bulkiness of the book, especially by eliminating the bulk of the EL wire USB cord.

I asked some friends to test the interactions by placing the pages in front of them without any instructions. Most of the time they played with the pages in the intended, straightforward way, but sometimes it was hard for them to understand what to do, so I adjusted some circuits accordingly. I would like to do more user testing and improve the interactions as well.