Nicholas Sanchez: Capstone Documentation

Introduction

Animatronics are, for all intents and purposes, the manifestation of puppetry in today’s age of media and technology. Whether pre-programmed to run the same act or controlled in real time by a distant puppeteer, animatronics fill the niche once occupied only by marionettes, ventriloquist dummies, and shadow puppets. However, unlike its low-tech predecessors, the animatronic is composed of cutting-edge and novel technology, and is by no means limited to the physical restrictions present in traditional puppet theatre. The animatronic is embedded with new technologies while still maintaining the essence and ontology of the “puppet”.

With the aid of the IMA faculty, staff, and resources, and equipped with the knowledge acquired over several years studying in this program, I set out to create my own animatronic: a puppet that, like the icons of older puppetry, represented the intricacies and magic of the art form. At the same time, this puppet’s physical form would vary from the archaic standard, as its corpus and means of manipulation would largely rely on newer technologies.

The purpose of this capstone project was to create an animatronic that could be controlled in real time via computer vision and projection mapping. The goal was to create a puppet whose arm motions would mirror those of the puppeteer with minimal latency and high accuracy, and whose facial expressions would be mapped to the puppeteer’s. Using Arduino, Processing, servos, and other tools, I set out to achieve this goal. And while the most recent iteration may not be pretty, I believe I met my expectations with a relatively high degree of success.

Pre-Production

From my youth until now I have been fascinated with robots and robotics. I can remember the cartoons I watched as a boy, and the fantastical ideas of what an autonomous mechanical being could be. I loved anime like Gundam, where pilots flew gigantic armored robots, as well as movies like Terminator, where humanoid chrome cyborgs walked the Earth. Birthday parties would often take place at Chuck E. Cheese, where a band of animatronic animals sang and advertised pizza. Even my trips to Disneyland would fill me with awe, as I was amazed and enamored with the pirate, ghost, and animal animatronics. Needless to say, this is a field I have found fascinating for a very long time.

Disney’s animatronics were of particular interest to me, due to their nuance and theatricality. For decades, Disney has been a pioneer and leader in the development of animatronics, and virtually invented the field. While the earliest animatronics were nothing more than simple and rather uninteresting animated animals, over the years Disney has conceived some of the most articulate automata in history. Most notable, of course, is the “Buzz Lightyear” animatronic in Disneyland’s “Astro Blasters” attraction. Here, a life-sized Buzz Lightyear stands before guests waiting to board the ride and provides contextual exposition. While the novelty of seeing Buzz in real life is unique, its most notable aspects are its design and nuance.

The Buzz Lightyear animatronic differs from its sibling animatronics because it is a hybrid of traditional animatronic construction and new media. While Buzz’s body articulates via some mechanical process (most likely pneumatics), his face is projected rather than physically actuated. Imagine a movie screen, but curved to match the shape of a human face: that is the head of this animatronic, and a hidden projector casts a face onto it, creating the illusion of fully articulate eyes and a mouth.

 

What I loved about this animatronic is how the combination of physical mechanism and projection mapping really brings the character to life. The arms and torso, which move physically, give the animatronic form and presence. Yet the projected face truly conveys the essence of the character and creates the illusion of reality. When I later decided to make an animatronic for this capstone, I decided that whatever I built should emulate Buzz’s structure.

I began this project as I do all my projects: by pondering the idea behind it and how it might be accomplished.

I started thinking about things that interested me, particularly with regard to digital fabrication. Topics such as laser cutting and 3D printing were notions I considered expanding upon. In the end, however, my love for robotics prevailed.

Having already worked with robotics, I decided that I would draw upon previous experience to create a capstone relating to this subject. Once this was decided, my challenge became deciding what type of robot to build. While autonomous and self-sufficient robots are appealing, I thought I might instead focus on the more artistic and performative robots: animatronics!

Drawing from my adoration of this subject, and remembering some incredible examples (like Buzz Lightyear), I decided that my animatronic would be part mechanical and part projected. However, although I could ideate the animatronic’s physicality, I still needed to decide how I would control it. Questions arose, such as whether it should be pre-programmed or controlled in real time, and if controlled in real time, what hardware and software I would need. Ultimately, I decided to create an animatronic that could be controlled in real time and would move its arms based on the operator’s arm motions, so as to mirror them. In addition, the animatronic’s face, projected onto the head much like Buzz Lightyear’s, would blink and open its mouth in response to the puppeteer’s eyes and mouth.

Now that I had an idea of what I was going to do, I drew on past projects for inspiration and for the knowledge to accomplish it. As stated above, I planned on making an animatronic with movable arms and a projection-mapped face. I scoured sources like Instructables.com for ideas and methods, and reflected on which of my past projects had been successful and were pertinent to this capstone.

In the summer of 2016, I was a student researcher in NYU Shanghai’s DURF program. In this program, a team of peers and I worked to create a robotic arm that could be moved by tracking a user’s head. The crux of this project was the robotic arm itself, which was constructed from six high-torque servos and driven via Arduino. This experience taught me how to control multiple servos using Arduino programming and serial communication, so I already had the foundation for building and operating the animatronic’s two robotic arms.

In terms of controlling the arms in real time, the challenge became finding the most efficient method of capturing human arm motion while maintaining a workflow that could send that data to the Arduino. My initial idea was to use Microsoft’s Kinect in conjunction with Processing, as libraries and examples exist in which the two work in tandem to record a user’s arm and joint positions. Using the Kinect’s infrared sensor, Processing can generate a 3D image of a user and fit a virtual skeleton to their body. My hope was to capture the positions and angles of the arm joints, such as the shoulder, elbow, and wrist, so that I could translate this data into servo-friendly values.

Unfortunately, as you will see later, this approach didn’t work out: the Processing-Kinect skeletal tracking only runs on a 64-bit PC with 64-bit Windows 10. Despite my best efforts, which included using several different PCs and even buying my own just to run this system, establishing the skeletal tracking failed for one reason or another.

Now, the ability to control the animatronic in real time was an integral component of this project. So my remedy was to use Processing’s built-in computer vision abilities to track light instead of the Kinect. The idea was that if I held a flashlight in each hand and created a Processing sketch that captured the positions of those lights, I could create my own version of skeletal tracking. Writing this code was simple, as I had already done a project that tracked a light and recorded its x-y coordinates as perceived by the computer’s camera. By adjusting that code for the purposes of my animatronic, I had a good starting point for user arm control.
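To give a concrete sense of the approach, here is a minimal sketch along the lines of Processing’s standard brightness-tracking example. It reports the x-y position of the brightest pixel the webcam sees, which is where a handheld flashlight would register; the resolution and variable names are assumptions rather than my exact capstone code.

```java
// Minimal brightest-pixel tracking sketch (Processing video library).
import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
  }
  image(video, 0, 0);

  video.loadPixels();
  float brightest = -1;
  int brightX = 0, brightY = 0;
  for (int y = 0; y < video.height; y++) {
    for (int x = 0; x < video.width; x++) {
      float b = brightness(video.pixels[y * video.width + x]);
      if (b > brightest) {
        brightest = b;
        brightX = x;
        brightY = y;
      }
    }
  }

  // Mark the brightest point; these coordinates later feed the arm code
  noFill();
  stroke(255, 0, 0);
  ellipse(brightX, brightY, 20, 20);
}
```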

Similarly, I had already done a project with projection mapping and face mapping the previous semester. My “Singing Spooky Pumpkins” project featured a host of pumpkins that sang the classic song “Monster Mash”. In this project, one of the pumpkins sat idle until a user sang along with the song, at which point the pumpkin would also “sing”, its eyes and mouth moving based on the singer’s facial expression.

Using the knowledge gained from research and my own prior projects, I set out to build this animatronic.

Now that I had an idea about what I was going to do and how I might accomplish that, I began to get organized by focusing on what I needed and how to best schedule my time.

As far as materials go, I knew I needed the following:

  • Servos
  • Servo holders
  • A power supply (for the Servos)
  • An Arduino
  • PVC pipe for the animatronic’s frame
  • Miscellaneous circuitry (perfboards, wires, solder, etc.)
  • Laser cutting material
  • Hot Glue
  • Screws and bolts
  • Spray paint

 

For software, I would need:

  • Processing (to map face and arms and send that data to the projector and servos, respectively)
  • Arduino (to drive the servos)
  • FaceOSC (for face tracking)
  • Tinkercad (for 3D modeling)
  • Adobe Illustrator and Photoshop (to design the animatronic’s face and body)
  • MadMapper (for projection mapping)

 

For computing, controlling, and presentation, I would need:

  • A computer with a 64-bit processor and a camera (two computers if one is reading the face and the other is tracking the arms)
  • A projector

 

For fabrication, I would need:

  • A laser cutter
  • A 3D printer
  • Drill, Dremel, hacksaw, and a grip or vise

In addition, I needed to organize the project into achievable sections. I divided each aspect of the animatronic into a category that could be tackled independently of the others. The idea was that by separating these aspects, I could “divide and conquer” in a timely manner. The categories I conceived were Face (facial animation, facial mapping, and projection) and Body (construction, servo arrangement, servo control, body tracking, and aesthetics). With this plan in place, I set out on my constructive journey.

Production

The animatronic could be divided broadly into two categories: its face and its body. Each of these categories brought its own challenges in terms of construction and design. I created some 3D models to help myself visualize and better understand what I would build, and these models helped me in many ways.

For instance, I knew that I wanted to build a head with two eyes and a mouth that could be projected onto. I knew I needed a physical frame or skeleton, and wanted an exoskeleton to cover these parts. Each arm, which would need to move to emulate the puppeteer’s, needed three points of motion, or degrees of freedom: two in the shoulder and one in the elbow. My 3D models provided a basic but explanatory idea of what the animatronic would be, and a springboard from which I could further my development.

As nothing physical had been constructed yet (all I had were 3D models at this point), I needed to build a paper prototype. The idea of a paper prototype is to create a model sans technology, giving others an idea of how it should work while articulating the fundamentals of what it can do. To make my paper prototype, I used spare boxes and cardboard to create a body. I knew the arms needed to bend, so I cut the arm boxes where the “elbows” would be and used a flap along the “shoulder”. Attached to this flap were empty soda bottles, which fit conveniently into the shoulder socket. This allowed me to rotate the shoulder and added that final degree of freedom.

For the paper prototype’s head, I cut a soft ball in half and glued the halves to the head. I added a paper nose, and then used sticky notes with different mouth positions to simulate what the real projection might show.

I showed the paper prototype to my classmates and received valuable feedback from the group. For instance, I learned that the Wii Nunchuk, which I originally had planned on using to control the arms, was actually not a very good medium. Therefore, I was advised to use the Microsoft Kinect instead. In addition, this model was rather big, and I was encouraged to scale it down. After this session, with this advice in mind, I set out to build the first iteration of my animatronic.

First Iteration

Now, the first iteration was by no means a finished product. In fact, I designed it to be more of a proof of concept than an actual model: the idea was to get a working model functional, not pretty or high quality. So, using cheap materials and scrap, I began building it. I started by buying some PVC piping for the animatronic’s skeletal frame. Unfortunately, the first pipes were far too big, so I returned to the hardware store for smaller tubes. Using these tubes, I built the basic frame easily.

For the animatronic head, I used a discarded box and plastered one side with polymer clay. This polymer clay was molded to resemble my initial 3D model’s face.

Initially, I wanted to use nine-gram servos for the animatronic’s arms. So, using Tinkercad, I designed several holders for these servos, as no suitable ones existed. After a few iterations, I settled on a design and printed the holders. I assembled the arms and placed them on the animatronic; after that, all that was left was to move on to the face.

For the facial projection, I needed to create some basic assets that could be animated in Processing. Using Adobe Photoshop, I created “eyes” and “mouths” for the animatronic and exported them as PNGs. There were seven PNGs for the eyes, each representing a different level of openness: for instance, one PNG for when the lids were closed and one for when the lids were wide open. In addition, I made a mouth PNG to represent the mouth position for each of the nine phonetic sounds English speakers make while speaking.

Using FaceOSC with Processing, I was able to map several different points on my face. FaceOSC would track features such as my eyebrows and mouth, and Processing would receive and interpret these points, translating them into Cartesian x-y coordinates and openness values. I wrote a Processing sketch that used these values to display the corresponding PNGs. For instance, if my eyes were opened wide, Processing would display the PNG depicting the widest eye, and if my mouth made an “a” sound, Processing would show the PNG depicting the “a” sound. Unfortunately, due to FaceOSC’s buggy nature, the mouth mapping didn’t work too well and often failed to show the right mouth at the right time. Despite this, FaceOSC did a great job mapping my eyes. After creating this initial code, all that remained was to project it onto a surface.
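To give a sense of how this fits together, here is a minimal sketch of the FaceOSC-to-PNG mapping. It assumes FaceOSC is sending to its default port 8338, that the seven eye frames are saved as eye0.png through eye6.png (hypothetical filenames), and that the eye-openness value falls roughly in the 2–4 range; the real sketch used my own assets and tuning.

```java
// Minimal FaceOSC receiver that picks an eye PNG based on eye openness.
import oscP5.*;

OscP5 oscP5;
PImage[] eyeFrames = new PImage[7];
float eyeOpen = 3.0;   // last eye-openness value received from FaceOSC

void setup() {
  size(640, 480);
  for (int i = 0; i < eyeFrames.length; i++) {
    eyeFrames[i] = loadImage("eye" + i + ".png");   // hypothetical filenames
  }
  oscP5 = new OscP5(this, 8338);                    // FaceOSC's default port
  // Route FaceOSC's left-eye openness messages to the eyeLeft() method below
  oscP5.plug(this, "eyeLeft", "/gesture/eye/left");
}

// Called by oscP5 whenever a /gesture/eye/left message arrives
public void eyeLeft(float value) {
  eyeOpen = value;
}

void draw() {
  background(0);
  // Map the openness value onto the 0-6 frame index (input range is an assumption)
  int frame = (int) constrain(map(eyeOpen, 2.0, 4.0, 0, 6), 0, 6);
  image(eyeFrames[frame], 200, 100);
}
```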

My original plan for controlling the servos in real time was to write a Processing sketch that used three-dimensional skeletal tracking via the Microsoft Kinect and translated the tracked positions into servo-friendly values, which would then be sent to the Arduino to drive the servos. Though seemingly complex, the theory and implementation are easy enough. For this first iteration, however, I used a Processing sketch that did not incorporate skeletal tracking; instead, an in-program GUI let you control the servos directly. This is how the first iteration’s arms were controlled.
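As a stand-in for the actual GUI, the sketch below shows the basic idea under some assumptions of my own: the mouse position substitutes for the on-screen controls, the Arduino is assumed to be the first serial port, and the header-byte-plus-angles protocol is just one simple way to send the values.

```java
// Minimal mouse-driven servo control: map mouse position to two angles and send them to the Arduino.
import processing.serial.*;

Serial arduino;

void setup() {
  size(400, 400);
  // Serial.list()[0] is an assumption; pick whichever port the Arduino appears on
  arduino = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  background(50);
  int shoulderAngle = (int) map(mouseX, 0, width, 0, 180);
  int elbowAngle    = (int) map(mouseY, 0, height, 0, 180);

  // Simple protocol: a header byte, then one byte per servo angle (0-180)
  arduino.write(255);
  arduino.write(shoulderAngle);
  arduino.write(elbowAngle);

  fill(255);
  text("shoulder: " + shoulderAngle + "  elbow: " + elbowAngle, 20, 20);
}
```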

I presented this first iteration to the class to help refine the design and gather further feedback. One problem was already clear: nine-gram servos are generally low torque and have weak nylon gears that are prone to breaking. Still, the feedback I received was ultimately positive, because my peers understood that I was still in the design process. Generally, they understood what I was trying to accomplish and saw that the basics were there.

The improvements I noted from this testing were the following. First, the realistic mouth PNGs were not achieving their intended goal: they did not look robotic and were not mapping properly. Next, I needed to use stronger, better servos. In addition, the arm motion was quite buggy and inconsistent because I was not supplying each servo with enough power. All of this would have to be improved in the next iteration.

Second Iteration

This time around, I used longer pieces of PVC piping to construct the second iteration’s frame. After building this version, I spray-painted it silver, as much of the frame would be visible. I also drilled two holes into each of the frame’s “feet” so that I could later screw it down to a base.

My initial idea for the animatronic’s head was that it would look “robotic” and have a cute face that allowed me to project onto its eyes and mouth. Physically, this meant that the eye and mouth areas needed to be smooth, matte, and projection friendly. As long as these requirements were met, the rest of the head design could be purely aesthetic. In Tinkercad, I designed a head that resembled my initial sketches in a more stylized and 3D-printable manner.

For the arms, as previously stated, I needed larger servos. I upgraded to the MGS-960, a cheap alternative with metal gears and higher torque, allowing the animatronic better articulation than before. To mount these servos, I used braces specifically designed to hold servos of this size. I designed each arm so that one servo served as a shoulder and controlled the shoulder’s “yaw”. Attached to this was a secondary shoulder servo, which controlled only the shoulder’s “pitch”; this second servo was attached to braces forming the animatronic’s upper arm. At the other end of the upper arm was the third servo, which functioned as an elbow and connected the upper arm to the forearm and hand. I used the braces to construct the arm and, after tediously attaching the servos, managed to make this configuration work with the previous arm-control design.

I purchased a computer power supply to power the six servos. This supply provides 5 volts at 12 amps, which was ideal: each servo can draw upwards of 1 amp at 5 volts under load, so all six together could approach the supply’s 12-amp limit. With adequate power, the servos could operate under nominal conditions, meaning they would turn, move, and otherwise rotate far better than the nine-gram versions had.

The next iteration of the facial-mapping code changed very little. I removed the mouth PNGs in favor of five mouth “vents”. These vents would grow longer and glow orange when the operator opened their mouth, and shrink and turn grey when the operator was not speaking.
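As a rough sketch of that behavior (the sizes, the speaking threshold, and the mouthHeight variable are assumptions standing in for the FaceOSC mouth-openness value used above), the vent drawing looks something like this:

```java
// Five mouth "vents" that lengthen and glow orange while speaking, and stay short and grey otherwise.
float mouthHeight = 1;   // would normally be updated from FaceOSC's /gesture/mouth/height

void setup() {
  size(400, 200);
  noStroke();
}

void draw() {
  background(0);
  // Fake an opening/closing mouth so the sketch runs on its own
  mouthHeight = map(sin(frameCount * 0.05), -1, 1, 1, 7);
  drawVents(80, 60);
}

void drawVents(float x, float y) {
  boolean speaking = mouthHeight > 2.5;                              // threshold is a guess
  float ventLength = map(constrain(mouthHeight, 1, 7), 1, 7, 10, 80);
  for (int i = 0; i < 5; i++) {
    fill(speaking ? color(255, 140, 0) : color(120));                // orange when talking, grey when idle
    rect(x + i * 25, y, 15, speaking ? ventLength : 15);
  }
}
```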

Now, the most notable aspect of this second iteration was the new code for controlling the arms. Initially, my plan was to use the Microsoft Kinect: by using it in tandem with Processing, I should theoretically have been able to use skeletal tracking and translate the arm and joint positions into servo-friendly values, which would then be sent to the Arduino. Unfortunately, this failed. I cannot overstate the pain I went through trying to get this working, but I will document it as follows. While the Kinect and Processing can work together on macOS, the skeletal tracking library is not included in that package; to attain skeletal tracking, one must use a PC. So, after figuring this out, I went and bought a 64-bit PC running a 64-bit version of Windows 10. Unfortunately, this computer failed to work, as did the next two PCs (with identical specs). Taking this as an omen that skeletal tracking was not meant for this project, I cheated and created my own stand-in for it.

Using light tracking with Processing, I was able to obtain the x-y coordinates of any color I chose to track. This meant that if, say, I used a flashlight, I could track that flashlight’s x-y position and record those values. I elected to use two flashlights, one for each hand, so that I could get the x-y coordinates of each hand. Then I created an inverse kinematics scheme, such that shoulder, elbow, and wrist values would be generated from the color tracking. This required tweaking, but once it was done, I managed to successfully generate values for the servos.
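The underlying math is standard two-link inverse kinematics. The sketch below is a simplified illustration of that scheme, assuming the shoulder sits at the origin, the tracked flashlight position is the target, and the link lengths are placeholder values; my actual code added the second shoulder axis and plenty of tuning.

```java
// Two-link planar inverse kinematics: from a target point to shoulder and elbow angles.
float L1 = 100;   // upper-arm length (placeholder, in pixels)
float L2 = 100;   // forearm length (placeholder, in pixels)

void setup() {
  // Example: a target 120 px right and 60 px down from the shoulder
  float[] angles = solveArm(120, 60);
  println("shoulder: " + toServo(angles[0]) + "  elbow: " + toServo(angles[1]));
}

// Returns {shoulderAngle, elbowAngle} in radians for a 2D target.
float[] solveArm(float targetX, float targetY) {
  // Clamp the target to the arm's reach so acos() stays defined
  float d = min(dist(0, 0, targetX, targetY), L1 + L2 - 0.001);
  float c2 = (d * d - L1 * L1 - L2 * L2) / (2 * L1 * L2);
  c2 = constrain(c2, -1, 1);
  float elbow = acos(c2);                               // 0 = fully straight arm
  float shoulder = atan2(targetY, targetX)
                 - atan2(L2 * sin(elbow), L1 + L2 * cos(elbow));
  return new float[] { shoulder, elbow };
}

// Convert an angle to the 0-180 degree range the servos expect (offset is an assumption)
int toServo(float angleRad) {
  return (int) constrain(degrees(angleRad) + 90, 0, 180);
}
```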

I laser cut a base to hold the animatronic and attached the animatronic’s frame to it. In addition, I took some scrap and fashioned it into aesthetic components to embellish the animatronic’s body. After spray-painting the base and body parts to match, I assembled the physical body and was ready to bring everything together.

Finally, I hooked everything up: the servos to power and the Arduino; the Arduino to the computer tracking the body via Processing; and the projector to the computer running the Processing sketch reading my face. After some more adjustments in the space, I filmed myself with all components of the second iteration working with a sufficient degree of success.

This iteration gave me much to think about. While I was happy with the design, there are still many aspects to fine-tune for the next version. I plan on completing one more iteration, which will be an improvement on all the others.

 
