The DigiSteenbeck – Interactions Lab Jeffrey Han

The Problem

In the modern age of technology, it is difficult to hold people's attention, and it takes more and more effort to keep people interested in the world around them.  Film is suffering from this.  People are going to the cinema less frequently because the moviegoing experience is simply not engaging enough anymore, and the film industry is resorting to gimmicks like 4DX or comfier seats to keep viewers in the theater.  At the same time, with VR and AR technology on the rise, simply watching a film or video no longer feels extraordinary.  Meanwhile, the process of editing a film has devolved from cutting physical film to pressing buttons and clicking a mouse.  Moviemaking and moviewatching are supposed to be fun, engaging experiences, but this is slowly changing for the worse.

The Solution

In order to revolutionize the movie experience, we here at JRJH Productions believe the key is to make it interactive.  If viewers can interact with the film itself, they can choose what they see on screen.  We want to create a device that lets the viewer physically manipulate the film, changing the angle, focal length, or even the location, as if they were the director.  This way, the movie experience becomes far more engaging.  The same device can also be used for editing, giving the editor a more hands-on experience, a bit like VJing (DJing, but with video).

The Inspiration

To design this device, we looked back at the age of 35mm film and the flatbed editing system.  Before film was edited on computers, editors worked at a flatbed, physically cutting the film.  Coincidentally, many also consider that era the “golden age” of Hollywood.  One famous maker of flatbeds is a company called “Steenbeck”.

For this project, we wanted to take the classic Steenbeck design and make it modern.  So we present to you: The DigiSteenbeck.

The Design

We designed the DigiSteenbeck in Adobe Illustrator and laser cut it from 3mm-thick cardboard.  The design uses three “film rolls” as the controls for the device, one each for pan, tilt, and zoom.  There is also a “film strip” in the middle where the user can place different film clips into the timeline to change what shows up on screen.  We wanted the device to look modern yet retro and classic at the same time.

The Production

While making the device, we ran into several problems, the first being laser cutting.  On our first attempt, the laser cutter was cutting 5mm-thick board on a 3mm setting.  Neither Joel, the IMA fellow, nor I realized this until we had run the cut twice.  The smell of the laser cutting was also horrible to bear.

Later that day, we made our second laser cutting attempt.  This time we paid close attention to the settings and eventually got them right.  Even so, some pieces were not cut all the way through, and we had to tear them off the board.

After cutting the pieces out, we put the board together, which was a fairly straightforward process.

The Medium

For this specific project, we decided to use dance as our film.  Dance is such an elegant art form that it must be seen from different angles and locations to be fully appreciated.  We collaborated with three pairs of professional dancers who performed the same piece in three iconic Shanghai locations: EXPO Park, The Bund, and BingJiang Avenue.  As filmmakers, we wanted the cinematography to be grand.  We shot versions from different angles, but in the end we used only the extreme wide shot for the DigiSteenbeck, so that the viewer can choose to zoom in and see the details.

The Circuit & Coding

The circuit design of this project is actually quite simple: three potentiometers operating independently, plus an RFID tag to switch between the different videos.  Coding the device was much more complicated.  We quickly realized that Processing is a poor way of loading video.  We shot our dance films in beautiful 4K, but had to compress them into tiny 720p, 10MB files for Processing to load them at all.  This was a huge disappointment, but it was not something we could change.
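The Arduino sends all four sensor readings to Processing as one comma-separated line per loop, which the updateSerial() function in the code below splits and parses.  Here is a minimal, standalone sketch of just that parsing step in plain Java; the class and method names are ours for illustration, but the line format and value count match what updateSerial() expects:

```java
import java.util.Arrays;

// Parses one serial line like "2,128,64,255" into the four sensor values
// (video select, pan pot, tilt pot, zoom pot). Malformed or partial lines
// are rejected by returning null, just as updateSerial() skips them.
public class SerialLineParser {
    static final int NUM_OF_VALUES = 4;

    static int[] parseLine(String line) {
        String[] parts = line.trim().split(",");
        if (parts.length != NUM_OF_VALUES) {
            return null; // ignore lines that don't carry all four values
        }
        int[] values = new int[NUM_OF_VALUES];
        for (int i = 0; i < NUM_OF_VALUES; i++) {
            values[i] = Integer.parseInt(parts[i].trim());
        }
        return values;
    }

    public static void main(String[] args) {
        int[] v = parseLine("2,128,64,255\n");
        System.out.println(Arrays.toString(v)); // [2, 128, 64, 255]
    }
}
```

Checking the part count before parsing matters here: serial reads can start mid-line, so the first string after opening the port is often truncated, which is also why setupSerial() throws away its first reading.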

Coding-wise, the most frustrating part was figuring out the boundaries for zoom, pan, and tilt.  Because the zoom is driven by a potentiometer, the boundaries of the frame keep changing as the user zooms.  That meant we could not map our pan and tilt values to fixed numbers; we had to map them to an equation.  Joel and I got two of our math-major friends to work out this equation for us.  Even with the equation in hand, we still had to implement it in code, which felt like another impossible challenge.  Joel and I took turns sitting down with Luis to try to get it working.  It took us three hours, and there was still no perfect solution.  In the end, the system works perfectly at 2x zoom and okay-ish from 1.3x to 1.9x, but there is a threshold around 1.1x to 1.3x where it still glitches.
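The core of that equation is simple to state, even if it was painful to implement: scaling a frame of width w by zoom leaves (w*zoom - w) pixels hanging off the screen, so the pan center may travel half of that slack to either side of w/2 (and likewise for tilt with h).  Here is a simplified sketch of that mapping in plain Java; map() mirrors Processing's built-in, and the exact bounds in our final sketch differ slightly because the drawing happens inside a scale(zoom) transform:

```java
// Simplified version of the zoom-dependent pan boundary equation.
public class PanBounds {
    // same linear interpolation as Processing's map()
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    // pan position for a pot reading (0..255) at a given zoom:
    // the scaled frame overflows the screen by (w*zoom - w) pixels,
    // so the centre may slide +/- half of that around w/2
    static float pan(int pot, float w, float zoom) {
        float slack = (w * zoom - w) / 2;
        return map(pot, 0, 255, w / 2 - slack, w / 2 + slack);
    }

    public static void main(String[] args) {
        // at 1x zoom there is no slack, so pan is pinned to the centre
        System.out.println(pan(0, 1280, 1f));   // 640.0
        // at 2x zoom the centre can travel the full half-width
        System.out.println(pan(255, 1280, 2f)); // 1280.0
    }
}
```

This also explains the glitchy threshold we saw: just above 1x zoom the slack is only a few pixels, so rounding the zoom to one decimal place makes the allowed pan range jump around noticeably.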

The Final Device

After weeks of hard work, we finally finished the DigiSteenbeck.

This elegant device attracted a lot of attention in the IMA space and was praised for its design.

Initial Responses & New Changes

After some user testing, we received very positive feedback on our project.  However, some people noted that it was still a solitary experience: only one person can play with the DigiSteenbeck at a time.  This gave us the idea to add keyboard functions, so that one person can use the DigiSteenbeck while another uses the keyboard.  That turns the experience into a collaborative process, something filmmaking itself emphasizes.  We added functions such as color filters, slow motion, and music changes, all activated from the keyboard.

When we tested this version of the project, people loved it even more, but said it would be cool to have a second physical device to play with instead of the keyboard.  We couldn't agree more, but time constraints meant we couldn't laser cut and assemble another device, so we kept the keyboard.

After this, we also realized we needed a set of instructions so the user knows what is going on.

Presentation Responses

Joel and I were really nervous before the final presentation because we had no idea what people would think.  However, the reactions and comments were very positive, which was a big relief.  I loved Nemrah’s comment that this could be something she would see in a museum exhibit.  It felt good to get this recognition, considering we are filmmakers, not programmers or designers.

IMA Show and User Testing

During the IMA Show, we set up our stand right next to the door, and thankfully this drew big crowds to our station.  Visitors were very intrigued by our invention, and many even wanted to take pictures of it.  That was a special feeling: when someone wants to photograph your artwork, it means you are doing something right.

After many visits to our stand, however, we noticed a pattern of problems.  The first was that we had written our instructions only in English.  We had forgotten that we are in Shanghai and that many visitors may only speak and read Chinese, so many Chinese visitors had no idea what was going on, and I had to explain to them verbally what to do.  Our labels “Tilt”, “Pan”, and “Zoom” were also only in English.  Even then, we did not realize those terms might be film jargon: some English speakers who understood the instructions still did not know what the different wheels were supposed to do.

Another problem was loading the film clips.  The film strip had three frames; the middle one was empty, with the RFID reader underneath it.  We thought an empty frame would clearly signal that this is where the clip should go, but people still tried loading clips into the other frames, which already had images in them.

Other than that, though, I am really pleased with the project.  I do believe the functionality could be clearer, and if I were to do this project again, I would make sure that was the case.

But in the end, I just want to thank Antonius for this great semester, and all of the IMA Fellows for their support!

//DIGISTEENBECK PROCESSING CODE
import processing.serial.*;
import processing.video.*;
import processing.sound.*;

String myString = null;
Serial myPort;
//char valueFromArduino;
SoundFile m1, m2, m3;
Movie Ch1, Ch2, Ch3;
PImage helpimg;

int x, y, w, h, vid, counter;
float zoom, pan, tilt;

int NUM_OF_VALUES = 4;   /** YOU MUST CHANGE THIS ACCORDING TO YOUR PROJECT **/
int[] sensorValues;      /** this array stores values from Arduino **/

void setup() {
  background(0);
  helpimg = loadImage("DigiSteenbeck Instructions.jpeg");
  imageMode(CENTER);
  setupSerial();
  //myPort = new Serial(this, "/dev/cu.usbmodem1411", 9600);
  //size(640, 360);
  fullScreen();
  w = width;
  h = height;
  Ch1 = new Movie(this, "IBR_BJ.mp4");
  Ch2 = new Movie(this, "IBR_EXPO.mp4");
  Ch3 = new Movie(this, "IBR_WT.mp4");
  //Ch1.play();
  //Ch2.play();
  //Ch3.play();
  Ch1.loop();
  Ch2.loop();
  Ch3.loop();
  m1 = new SoundFile(this, "doppelganger.mp3");
  m2 = new SoundFile(this, "TRACK UNO.aiff");
  m3 = new SoundFile(this, "100000.mp3");
  m1.play();
  //m2.play();
  //m3.play();
  //m1.loop();
  //m2.loop();
  //m3.loop();
  //Ch1.pause();
  //Ch2.pause();
  //Ch3.pause();
}

// clicking the mouse cycles through the three soundtracks
void mousePressed() {
  counter+=1;

  if (counter==1) {
    m1.stop();
    m2.play();
   // m2.jump(15.3);
  }
  if (counter==2) {
    m2.stop();
    m3.play();
   // m3.jump(15.3);
  }
  if (counter==3) {
    m3.stop();
    m1.play();
    counter = 0;
  }
}

// W/A/S/D change playback speed, P pauses, H shows the instructions,
// and the arrow keys apply live image filters
void keyPressed() {
  if (key == 'w' || key == 'W') {
    Ch1.speed(4.0);
    Ch2.speed(4.0);
    Ch3.speed(4.0);
  } else if (key == 'a' || key == 'A') {
    Ch1.speed(.5);
    Ch2.speed(.5);
    Ch3.speed(.5);
  } else if (key == 's' || key == 'S') {
    Ch1.speed(2.0);
    Ch2.speed(2.0);
    Ch3.speed(2.0);
  } else if (key == 'd' || key == 'D') {
    Ch1.speed(.25);
    Ch2.speed(.25);
    Ch3.speed(.25);
  } else if (key == 'p' || key == 'P') {
    Ch1.pause();
    Ch2.pause();
    Ch3.pause();
  } else {
    Ch1.speed(1.0);
    Ch2.speed(1.0);
    Ch3.speed(1.0);
    Ch1.play();
    Ch2.play();
    Ch3.play();
  }

  if (key == 'h' || key == 'H') {
    image(helpimg, width/2, height/2, width/2, height/2);
  }

  if (key == CODED) {
    if (keyCode == UP) {
      filter(INVERT);
    } else if (keyCode == DOWN) {
      filter(THRESHOLD);
    } else if (keyCode == LEFT) {
      filter(ERODE);
    } else if (keyCode == RIGHT) {
      filter(POSTERIZE, 2);
    }
  }
}
void draw() {
  updateSerial();
  //printArray(sensorValues);


  vid = sensorValues[0];
  // map the zoom pot (0-255) onto a 1x-2x zoom, rounded to one decimal place
  zoom = map(sensorValues[3], 0, 255, 1, 2);
  zoom = round(zoom*10);
  zoom = zoom/10;

  if (zoom>1.2) {
    // scaling the frame by zoom leaves (w*zoom - w) extra pixels off-screen,
    // so the pan/tilt centre may slide within a zoom-dependent range
    pan = map(sensorValues[1], 0, 255, (w/2)-((w/2)*zoom-(w/2)), (w/4)*zoom);
    tilt = map(sensorValues[2], 0, 255, (h/2)-((h/2)*zoom-(h/2)), (h/4)*zoom);
  } else {
    // at (or near) 1x zoom there is nothing to pan across, so stay centred
    pan = w/2;
    tilt = h/2;
  }

  if (vid == 1) {
    //    Ch1.read();
    pushMatrix();
    //translate(width/2, height/2);
    scale(zoom);
    image(Ch1, pan, tilt, w, h);
    popMatrix();
  }

  if (vid == 2) {
    pushMatrix();
    //translate(width/2, height/2);
    scale(zoom);
    image(Ch2, pan, tilt, w, h);
    popMatrix();
  }

  if (vid == 3) {
    pushMatrix();
    //translate(width/2, height/2);
    scale(zoom);
    image(Ch3, pan, tilt, w, h);
    popMatrix();
  }

  //println("pan=", pan, "tilt=", tilt, "zoom=", zoom, "vid=", vid);
}

void setupSerial() {
  printArray(Serial.list());
  // NOTE: update this port name for your own machine. Check the list printed
  // above for "/dev/cu.usbmodem----" or "/dev/tty.usbmodem----" and use
  // that name here.
  myPort = new Serial(this, "/dev/cu.usbmodem1411", 9600);

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n', linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}



void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n', linefeed in ASCII
    if (myString != null) {
      println(myString);
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

// Called every time a new frame is available to read
void movieEvent(Movie m) {
  m.read();
}
