Bingling Du | Research Essay Draft (Greenspan)

Literature Review of Lighting Technology in Dance Performance

(Unfinished introduction: the history of lighting devices and the first recorded use of lighting devices in dance performances.)

During the first half of the twentieth century, stage lighting underwent a vital transition. Electric lighting instruments replaced gas and oil lamps, making possible a revolution of theatrical realism on stage. Light from a non-flammable source could come from overhead and focus downward realistically, as from the sun, moon, or stars. Designers hung their instruments in the grid and used them in conjunction with general lighting sources such as footlights and striplights. By the forties, Stanley McCandless of Yale University had analyzed and documented a technical approach to lighting, and the McCandless light plot, his system of “warm and cool” overhead lighting for each acting area, was, together with his other schemata, almost universally adopted by both legitimate and college stages. (Earliest application of a lighting system in dance stage settings: the beginning of everything.)

Definition of what will be discussed: the use of lighting technology in dance performances.


First stage: simple background, providing light and a basic backdrop.

Related projects: Lighting and Dance; History of Dance: An Interactive Arts Approach, chapter 7 (on the way to Shanghai).


Other stages: from non-interactive to interactive

Current research assets & projects review:

  1. Nuance: Dancing with Light

Artist: Christopher Jobson      Time: September, 2013

Description: In the performance, the dancer battles with various geometric forms of light that launch and morph as part of a carefully choreographed dance that marries human motion with motion graphics.

Stage: Mid Interactive-Interactive feature presented through the dancer’s actions toward the lights


  2. Light Body

Artist: Lia Chavez      Time: July, 2016

Description: Referring to a Tibetan Buddhist meditation practice, the dancers use lighting devices to express the process of transforming the old body into a new figure. The moving figure creates a distinctive environment.

Stage: Post-staging-Pre-set environment & Storytelling


  3. Dancing With Light

Artist: Eryc Taylor     Time: March, 2014

Description: Light projection. A program called “skeleton tracking” is used. It follows the motion of dancers and uses data stored on a laptop to generate arresting interactive graphics.

Stage: Early Interactive-Lighting reacting to dancer’s move


  4. Quixotic Fusion

Artist: / (Guest from TED 2012)    Time: June, 2012

Description: Pre-set, complete, 3D-generated light projection in order to provide more immersive storytelling environment to the audience.

Stage: Post-staging-Pre-set environment & Storytelling


  5. Pleiades

Artist: Enra  Time: January, 2014

Description: The dancers simulate “interaction” with light objects on a pre-rendered background.

Stage: Post-staging-Pre-set environment & Storytelling


  6. Ondulation

Artist: U-Machine     Time: March, 2014

Description: Background plus partial lights. Randomly generated lighting (how should the design of “randomness” be evaluated?). A fixed background clearly helps the storytelling process.

Stage: Undecided; could be Early Interactive.


  7. Hakanaï

Artist: Adrien M / Claire B      Time: March, 2015

Description: “We have developed a software—since 2006—about the motion of objects, based on physics models.” “And we think that mixing sensors and human interaction, like puppetry, is a good way to make things more lively… more, well, sensitive…”

Stage: Late Interactive-Dancer’s movements and lighting mechanics interact with each other.


  8. Tron Inspired

Artist: the Wrecking Orchestra       Time: March, 2012

Description: Wearable lighting suits used to create unreal figures and storytelling environments.

Stage: Post-staging-Pre-set environment & Storytelling


  9. Pixel

Artist: Adrien M / Claire B      Time: December, 2014

Description: A dance performance combining contemporary dance and sophisticated 3-D projection mapping. Dancers interact with the illuminations, swirling suspended motes of light with their limbs, riding small wireframe hills across the stage, and using an umbrella to ward off a cascade of shining specks. A controlled hand carves a path through a wall of white dots, and a bright hoop makes a hole in spacetime as the light projection reacts to the physical movements of the dancers.

Stage: Late Interactive-Dancer’s movements and lighting mechanics interact with each other.


  10. Lighting Choreographer

Artist: Lighting Choreographer     Time: September, 2010

Description: A system to expand the expressive capability of the human body through lighting. It produces light effects on the user’s body synchronized with motion and sound, with particular attention to the way the produced effects recursively influence the choreographer.

Stage: Possibly Post-staging-Pre-set environment & Storytelling, but the lighting system is the key actor in the performance instead of the dancer: a fixed actor plus moving lights, versus the traditional arrangement.

Code Academy Completion

I have finally managed to finish the Codecademy tutorial for HTML & CSS. Although I think it is a great way to familiarize yourself with HTML and CSS, I had a lot of difficulty following along, since I kept writing Java instead of JavaScript and kept making tiny mistakes that would mess up the entire thing.

(Screenshot: Codecademy course completion, 2017-03-08.)

Capstone Progress

3×3: An Interactive Sound Art Installation

Sound Design for Nine Speakers:

1-water drop

2-low frequency


4-electric sound






Capstone Progress Report (Vasudevan)

Field Trip- Photography.

Last weekend, I went to the Putuo area (Jiangning Road, Jade Buddha Temple) to take photos and record sound. The contrasts between the old and the modern, the rich and the poor, and the scenic and the residential were very obvious. But I ran into some problems while recording sound and taking photos:

  • The phone is not the proper device for recording sound, since many noises cannot be ruled out.
  • A single corner is very hard to present on its own; blocks (clusters) containing several corners might be a better idea.

Some of the pictures are here (the original files are too big):

(Photos: denglong, dinglou, miaolou, zufang, chai, miaopolou.)



I made a prototype (HTML, CSS, JavaScript) with Sublime Text 2, Mapbox and Leaflet for my website framework. But when I talked with Professor Greenberg about my project, he suggested that p5.js might be a more convenient and straightforward way. I have experience creating maps in p5.js, but I have never used p5.js to construct a website. This week, I will work on my GeoJSON file and create a framework based on p5.js.


My GeoJSON




I have an appointment with Professor Bruce Carrol about my research on the relationship between aesthetics in interface design and user preference.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Shanghai Corners</title>
    <link rel='stylesheet prefetch' href=''>
    <link rel="stylesheet" href="css/style.css">
  </head>
  <body>
    <figure>
      <div class="map" id="map"></div>
      <figcaption>Explore the sound, photos and the city</figcaption>
    </figure>
    <script src=''></script>
    <script src="js/index.js"></script>
  </body>
</html>

body {
  background-color: #f2a4a5;
}

figure {
  height: 100vh;
  margin: 0;
  width: 100%;
}

figcaption {
  font: 1em/1.5 avenir, Gabriola;
  padding: 1em;
  text-align: center;
}

figure .map {
  height: 80vh;
}

.map .leaflet-popup-content-wrapper,
.map .leaflet-control-layers,
.map .leaflet-bar,
.map .leaflet-marker-pane img {
  border: 0;
  border-radius: 0;
  box-shadow: 0 .125em .25em .125em hsla(218, 2%, 10%, .25); /* subtle shadows */
}

.map .map-legends,
.map .map-tooltip,
.map .leaflet-control-layers,
.map .leaflet-bar {
  border-radius: 0;
}

var geojson = {
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "title": "Typical DC crosswalk",
        "icon": {
          "iconUrl": "",
          "iconSize": [84, 56],
          "iconAnchor": [42, 56],
          "popupAnchor": [0, -56]
        }
      },
      "geometry": {
        "type": "Point",
        "coordinates": [] // coordinates missing in the original
      }
    },
    {
      "type": "Feature",
      "properties": {
        "title": "Mount Pleasant stone walls",
        "icon": {
          "iconUrl": "",
          "iconSize": [84, 56],
          "iconAnchor": [42, 56],
          "popupAnchor": [0, -56]
        }
      },
      "geometry": {
        "type": "Point",
        "coordinates": [] // coordinates missing in the original
      }
    },
    {
      "type": "Feature",
      "properties": {
        "title": "Orange leaves at sunset",
        "icon": {
          "iconUrl": "",
          "iconSize": [84, 56],
          "iconAnchor": [42, 56],
          "popupAnchor": [0, -56]
        }
      },
      "geometry": {
        "type": "Point",
        "coordinates": [] // coordinates missing in the original
      }
    },
    {
      "type": "Feature",
      "properties": {
        "title": "Dry Fall Leaves",
        "icon": {
          "iconUrl": "",
          "iconSize": [84, 56],
          "iconAnchor": [42, 56],
          "popupAnchor": [0, -56]
        }
      },
      "geometry": {
        "type": "Point",
        "coordinates": [] // coordinates missing in the original
      }
    }
  ]
};

var map ='map', 'opattison.ha5cm8b7', {
  tileLayer: {
    detectRetina: true
  }
}).setView([38.929, -77.042], 16);

map.featureLayer.on('layeradd', function(e) {
  var marker = e.layer,
      feature = marker.feature;
  // (handler body truncated in the original)
});


进柜 (Entering the Closet) - Tyler Rhorick Capstone Progress

Capstone Documentation:


Technical Progress:

As for the technological development of my project, I have built a simple circuit that is able to read RFID codes and transfer that information into serial communication. This process took longer than expected after I found out that the RFID cards in IMA do not work with our current RFID readers. After I talked about this in my one-on-one meeting with Scott, he ordered some new ones that would work for his class, and he has told me that I will be able to prototype with them. My next step will be learning how to use serial data in Max to trigger different video clips.
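The middle of that pipeline, turning a card ID that arrives over serial into a clip number for Max, can be sketched independently of the hardware. Everything below (tag IDs, clip numbers, function name) is a made-up illustration, not code from my circuit:

```javascript
// Hypothetical lookup from RFID card IDs (as read over serial) to the
// index of the video clip the Max patch should trigger.
var clipForTag = {
  "04A1B2C3": 0,
  "04D4E5F6": 1,
  "04778899": 2
};

function clipIndexFor(tagId) {
  // Serial data often arrives with stray whitespace or lowercase hex,
  // so normalize before looking the tag up.
  var normalized = tagId.trim().toUpperCase();
  // Unknown cards map to -1 so the patch can simply ignore them.
  return clipForTag.hasOwnProperty(normalized) ? clipForTag[normalized] : -1;
}

console.log(clipIndexFor(" 04a1b2c3 ")); // → 0
console.log(clipIndexFor("FFFFFFFF"));   // → -1
```

Keeping the lookup in one place should make it easy to add cards later without touching the reader code or the Max patch.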



Field Research:
As far as field research goes, this has been the most productive part of the process thus far. To get more familiar with the community and build trust, I have been attending LGBTQ-themed events around Shanghai, for example ORGASM, a film event at the nightclub Elevator, and different themed nights at the nightclub Lucca. The contacts I made there have given me access to different LGBTQ WeChat groups, which I have been using to find potential interviewees. In addition, I have interviewed NYU Shanghai faculty member Lixian Cui, who has just started research on the psychological effects of being an LGBT youth in China.

As for interviews, this process has been slower than expected, since I got sick and lost my voice for a couple of days, but I currently have my first filmed interview scheduled for tomorrow and a pre-interview scheduled for tonight. I hope this will pick up steam as I interview more people.




I applied for funding through the Dean’s budget proposal but have yet to hear back. I emailed the office to ask when the budget might become available, but I have not received any word, so I have decided that if I hear nothing, I will begin ordering next week to make sure I have materials I can use for my final project.

Last week, I was assigned a space; I still need to look at it and measure it out so that I can plan the most effective installation.


As far as the paper goes, I have collected a good deal of research thus far, which I have used to make my initial question list. What I have learned from my research is that in China, family remains one of the biggest obstacles to the older generation accepting the LGBTQ community. Because of this, I have decided to make questions about family and familial acceptance an integral part of my list. This also feels topical for the “closet” idea, because it is estimated that more than 80% of LGBTQ men in China are closeted to their families.


Capstone Progress Report (Vasudevan)

I talked with many IMA fellows to determine what technology I should use for my capstone. The technology I want to use has actually changed many times. At first, AJ suggested I use a Raspberry Pi to connect a self-made keyboard with an Arduino. Then I talked to other fellows and found out that a Raspberry Pi is not a must; instead, it could be an add-on to my project. To control the keys, a Makey Makey might be a good choice, so I tried that first. However, I found out that a Makey Makey only allows 18 keys in total, which was not enough for me. After discussing with Luis, we decided to use an Arduino Mega, which has 53 pins in total.


I ordered an 8×6 keyboard, which has 48 keys in total, and I have already decided on the layout of these buttons. I set up part of my project. I had planned to use the PCB board that comes with the keyboard, but I found that might be too complicated, so I decided to use only the frame. These mechanical keys can be treated as push buttons, so I can use Arduino and Processing to assign each button a value. Attached below is the test code for pressing one key.
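Assuming the 48 keys are numbered in row-major order across the 8×6 grid, the value for each button falls out of simple arithmetic. This is only a sketch of the numbering scheme (written in JavaScript for brevity), not the actual Arduino/Processing code:

```javascript
// Hypothetical numbering for an 8-wide, 6-deep button grid:
// row 0 holds values 0-7, row 1 holds 8-15, and so on up to 47.
var COLS = 8;
var ROWS = 6;

function buttonValue(row, col) {
  if (row < 0 || row >= ROWS || col < 0 || col >= COLS) {
    throw new RangeError("button out of range");
  }
  return row * COLS + col;
}

console.log(buttonValue(0, 0)); // → 0
console.log(buttonValue(2, 3)); // → 19
console.log(buttonValue(5, 7)); // → 47
```

The same row-major formula would let the Processing sketch recover a button’s row and column from its value (row = value / 8, col = value % 8) when it assigns sounds.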


For my paper, after talking with Sakar, I shifted direction a little. I wasn’t clear on what a literature review looks like, and Sakar showed me some examples, which was really helpful. Below are the resources I found. I wanted to write about the intentional design choices designers make to encourage overspending, but I found that might be too broad. So I want to start from the two intentional designs I noticed on Alipay (the convenience of scanning, and transparency) and then find resources related to those two.

Payoff: The Hidden Logic That Shapes Our Motivations

Review of Thinking, Fast and Slow

The Impact of Contactless Payment on Spending



Addiction by Design: Machine Gambling in Las Vegas



/*
 Button

 Turns on and off a light emitting diode (LED) connected to digital
 pin 13, when pressing a pushbutton attached to pin 8.

 The circuit:
 * LED attached from pin 13 to ground
 * pushbutton attached to pin 8 from +5V
 * 10K resistor attached to pin 8 from ground

 * Note: on most Arduinos there is already an LED on the board
 attached to pin 13.

 created 2005
 by DojoDave <>
 modified 30 Aug 2011
 by Tom Igoe

 This example code is in the public domain.
*/

// constants won't change. They're used here to
// set pin numbers:
const int buttonPin = 8;  // the number of the pushbutton pin
const int ledPin = 13;    // the number of the LED pin

// variables will change:
int buttonState = 0;  // variable for reading the pushbutton status

void setup() {
  // initialize the LED pin as an output:
  pinMode(ledPin, OUTPUT);
  // initialize the pushbutton pin as an input:
  pinMode(buttonPin, INPUT);
}

void loop() {
  // read the state of the pushbutton value:
  buttonState = digitalRead(buttonPin);

  // check if the pushbutton is pressed.
  // if it is, the buttonState is HIGH:
  if (buttonState == HIGH) {
    // turn LED on:
    digitalWrite(ledPin, HIGH);
  } else {
    // turn LED off:
    digitalWrite(ledPin, LOW);
  }
}

Alicja’s Capstone Update

Here are the updates on my Capstone project and essay so far:

  1. Technology

    I figured one of the most important parts of my project would be to get my code working, so I decided to work on that area for last week’s assignment. I managed to set up a pretty reliable program thanks to Tyler’s suggestion that I use OpenCV.

    First, I downloaded the OpenCV library and found a simple sketch drawing rectangles around the detected faces here:

    Once I got that set up, I started experimenting with adding sound – so I downloaded the Sound library, imported a soundfile and used the file.amp() command to control whether it’s heard or not depending on whether a face is detected or not.

    The last step was just adding the video to the program. I tested it last week in class and I was really happy with how well it worked. Now, I just need to experiment with an external camera and screen, because eventually I want to run my project on TVs with webcams mounted on top of them.

    Here is my Processing sketch:

    import gab.opencv.*;
    import*;   // Movie, Capture
    import processing.sound.*;
    import java.awt.*;          // Rectangle

    SoundFile file;
    Movie myMovie;
    Capture video;
    OpenCV opencv;

    void setup() {
      size(640, 480);

      myMovie = new Movie(this, "VideoDemo.mp4");
      myMovie.loop();

      video = new Capture(this, 640/2, 480/2);
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // needed before detect()
      video.start();

      file = new SoundFile(this, "Poem.mp3");
      file.loop(); // keep the file playing; amp() controls whether it is heard
    }

    void draw() {
      opencv.loadImage(video); // loading the camera video
      image(myMovie, 0, 54);
      //image(video, 0, 0);

      //noFill();
      //stroke(0, 255, 0);
      Rectangle[] faces = opencv.detect();

      //for (int i = 0; i < faces.length; i++) {
      //  println(faces[i].x + "," + faces[i].y);
      //  rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      //}

      // the sound is audible only while at least one face is detected:
      if (faces.length > 0) {
        file.amp(1.0);
      } else {
        file.amp(0.0);
      }
    }

    void movieEvent(Movie m) {;
    }

    void captureEvent(Capture c) {;
    }

  2. Video

    So far, I have assembled all of my footage and color-corrected it. I have also added transitions, but I still need to work on the timing. I have recorded the two voiceovers on my own for now, to just use in the editing process, but I have reached out to potential voice actors and I am waiting for their response.

  3. Essay:

    I have managed to decide on the format of my paper (a trend piece) and the final topic: Where do we draw the line between interactive and conventional cinema?

    I am planning to do more research, and have ordered a couple books from the NYU New York library to help me move forward, but for now I have the following trend categories:

    Divergent Pathways (the audience makes a choice; two or more choices; not necessarily a different outcome; specific points of divergence; Kinoautomat (Hales, 210), Computer and Blues by the Streets; Goodbye Cruel World (Hales, 211); Mr. Payback, Ride for Your Life, I’m Your Man)

    Collective Work (users contribute to the final work, it is modified by the constant submission of new material, The Johnny Cash Project)

    Individualized Narrative (meant to be experienced by everyone differently, the narrative or other aspects personalized based on the viewer, Cinelabyrinth, Wilderness Downtown, Oculus Rift, my own project)

    Cut VR scenescapes (scenescapes within films that allow for greater immersion but are unmodifiable; so perhaps not actually interactive?)



Capstone Progress, Ellie (Vasudevan)

Interactive project:

1) Research on technology is done. What I ended up using: HTML, CSS, JavaScript, Node.js and Express.js

2) Functional specifications are done: I finalized the layout of the platform and the user interaction

3) Started to work on programming: refreshed my memory of HTML, CSS and JavaScript (the latter was more like learning it again). Started to learn how to use the terminal, Node.js and Express.js, which took a lot of time

4) Started to think about character design

Here is the most basic prototype of the platform, made with HTML, CSS and JavaScript, with no design and with GIFs taken from the internet:

Research paper:

1) Wrote down my purpose statement:

Parents and teachers will know how social relationships with media characters can be useful in teaching children. I will show how components of these relationships such as attachment and character personification can make educational tools more effective. I will also elaborate on the factors that may affect how kids become attracted to virtual characters.

2) Thought about keywords: interactive media; children’s media; virtual characters; attachment; parasocial relationships

3) I have read and analyzed two readings:

Exploring the Relationships between Affective Character Design and Interactive Systems, by Kellen R. Maicher

Children’s Future Parasocial Relationships with Media Characters: The Age of Intelligent Characters, by Kaitlin L. Brunick, Marisa M. Putnam, Lauren E. McGarry, Melissa N. Richards and Sandra L. Calvert