Museum of Mediocre Artefacts: Nick Sanchez’s Documentation

Long ago, internationally infamous inventor Jingles Fakhr sought to make his name known… After a long and toilsome inventing career, creating useless and inoperable oddities, he finally made his breakthrough discovery… the Perpetual Light Machine!

For our project, we sought to make an exhibit whereby users would be drawn down a long and scary dark hallway. At the end, a contraption of some sort would sit idly, willing unsuspecting guests to draw nearer and observe it. Once they did, we behind the scenes would do something to scare them. This was the premise.

Much effort and time went into ideation, and many ideas were either dropped entirely or subtly embedded into the final concept. This was tedious and occupied much of our time. Nevertheless, we persisted, and eventually settled on a loose idea centered on a fictional inventor named “Jingles Fakhr”. The story was that Dr. Jingles was one of the many inventors of the 1800s who, like his contemporaries Edison and Tesla, sought to experiment with electricity and light. Though many of his inventions didn’t work (some of which we would exhibit to provide context during the show), his one successful invention was the “Perpetual Light Machine”. The conflict of our story arises when we share that this invention has dubious origins, causing many who view it to feel uneasy, hallucinate, and in some cases go crazy. It is for these “reasons” that we keep this artefact hidden behind a curtain and discourage all but the bravest guests from venturing in to observe it. After they do, we go about staging our fright.

We picked the corner of the IMA floor where the lockers stood as the space to stage this experience. To create the stage, we angled the lockers so that they narrowed as you walked towards the end of the hallway. The idea was to place the “Perpetual Light Machine” towards the end of the hallway, so that people would feel considerably claustrophobic as they neared it. Once audience members walked towards it and observed it, we would quietly place a costumed mannequin behind them. Once the mannequin was in position, a similarly costumed actor would jump out at them, causing them to recoil and turn around. At this point, they would suddenly see the mannequin that hadn’t been behind them before, and become even more terrified.

This was the plan. The challenge became planning for it. We coordinated with IMA staff to order several key props and set pieces off of Taobao. These included curtains to cover the entire stage, a mannequin, a head for the dress form, and some masks and hoods to costume the mannequin and the actor.

Initially, we planned to fabricate some broken electronics to represent two initial oddities before the final Perpetual Light Machine. To be honest, I made an automaton that could have turned with an Arduino, but I never implemented the circuitry to actually animate it. Nevertheless, this “automaton” was creepy and clearly dysfunctional, which was the point. We also never got around to creating a phonograph-like pair of headphones. Consequently, we only had the dysfunctional automaton to show as the pretext to the Perpetual Light Machine.

The Perpetual Light Machine prop was a student project borrowed from Sun Jingyi: a translucent 3 mm acrylic pyramid that glowed via an Arduino-driven LED setup underneath it.

Setting it up was not too difficult, but we improvised as we went along, making the process a little more time consuming. Nevertheless, the end result was rewarding and entirely worth it.


Nicholas Sanchez (and Abiral): Scaring the Computer

The idea of instilling fear in a person seems intuitive, as there are many ways to accomplish this, be it telling them a horrifying story or startling them by surprise. But instilling fear in a computer is a different idea altogether, not only because computers are beings humanity has yet to fully comprehend, but also because the parameters by which a computer “feels” fear are not easy to define. By anthropomorphizing the computer and using techniques in Max MSP, we managed to turn the computer into a gear in our metaphorical demonstration.

Our idea was simple: we would “scare” the computer using psychological horror. How? Well, a popular and particularly disturbing subgenre of horror is body horror, named for its grotesque mutilation of human bodies. Why does seeing the human body in contorted states upset us? What about it is so horrifying? That is not for me to say, but the principle that such imagery frightens people is well grounded. We decided to apply this idea to the computer and stage its fright by analogous means.

To do this, we staged an interrogation where we asked the computer for a set of passwords. The computer would, of course, deny us access, so we would use several methods to “coerce” it. After each refusal, we would use some psychological method to scare the computer, and the computer would respond, gradually emulating the experience of becoming increasingly terrified.

We crafted a script for this interrogation and then set about creating the animations for the computer. To represent the computer, we took an image of HAL 9000 from “2001: A Space Odyssey” and edited it in Photoshop so that it could be easily manipulated (turned on and off) in After Effects. After editing these layers, we imported them into After Effects to animate the computer getting “scared” and to create each response to the interrogation.

To make the robot voice, we used http://ttsreader.com/ to generate the lines the computer would speak during the interrogation. We took these sound files, edited them a bit, and then brought them into After Effects as well.

In After Effects, we animated the computer so that the eye lit up when it was speaking. Using expressions, we converted the audio levels to keyframes and then tied the brightness of the eye to those keyframes. In doing this, we made the eye brightest while the voice was speaking and dimmer between words or when the audio was quiet. After we had made clips for each reactionary dialogue as well as the ending, we exported them from After Effects for use in Max MSP.

Editing the computer’s “eye”

Max MSP was new software for us, and implementing it was challenging. While it has many advantages, such as its ability to render video, it lacks much of the conventional control logic of more traditional programming languages. In addition, the software is visual rather than coded line by line, which presented another challenge. Nevertheless, after hours of toil and experimentation, we managed to create a Max MSP patch that played each of the videos on command.

The next step was to integrate Arduino into the patch. Initially, we sought to use a pressure sensor to activate each of the subsequent videos: according to our script, after each of our “motivational” actions, a pressure sensor would activate the respective video. Unfortunately, despite our best efforts with multiple force and touch sensors, we couldn’t get a consistent response from them. Ultimately, we opted to use a pushbutton to activate the computer’s responses.
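
As a reference for the trigger itself, the Arduino side of the pushbutton can be very small. The sketch below is a minimal illustration, not our exact build: the pin number and the single-character serial message are assumptions, and on the Max side a [serial] object (or similar) would watch the port and advance to the next clip whenever that byte arrives.

```cpp
// Minimal pushbutton-to-serial sketch (assumed wiring: button on pin 2
// using the internal pull-up, so the pin reads LOW when pressed).
// Each new press sends one byte; the Max patch can react to that byte
// by playing the next video.
const int buttonPin = 2;
int lastState = HIGH;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  int state = digitalRead(buttonPin);
  if (state == LOW && lastState == HIGH) {  // new press detected
    Serial.write('n');                      // 'n' = "play next clip"
    delay(50);                              // crude debounce
  }
  lastState = state;
}
```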

the vibration sensor

Abiral experimenting with the pressure sensor

installing the push button onto the base of the box. This would activate each video.

 

After finally having all the technical aspects completed, we just had to get our performance in order. Using miscellaneous discarded computer parts and some stage presence, we pulled together a performance that conveyed the computer experiencing horror. Enjoy.

 

Nicholas Sanchez: Ghost-o-meter

The project for this week was to create an apparatus that gauged and recorded a user’s fear. While there are many ways to do this (computer vision via face tracking, eye tracking, pixel differencing, etc.), I elected to work with galvanic skin response (GSR). To do this, I decided to modify the project we did in class, amending it so that the item was more interesting and interactive.

My idea was to create a “Ghost-o-meter”, or ghost detection device. While this device would not actually detect ghosts, it would gauge the user’s GSR and use this data to “detect” them. The theory of galvanic skin response suggests that when a human enters a state of arousal (experiencing strong emotions, such as fear), their sweat glands produce sweat, which is conductive. Thus, the skin’s natural resistance goes down as the sweat makes it more conductive. A circuit read through the Arduino’s analog input pins can register this change in resistance, and thereby gauge galvanic skin response, albeit to a certain degree. And while the theory behind galvanic skin response, like the means of gauging it, is far from an exact science, it can still be an indicator of a user’s heightened sense of fear.

The “Ghost-o-meter” would essentially be a box with a handle. Inside the box would be the Arduino, with two probes extending into the handle. These probes would be placed so that both constantly made contact with the user’s hand, allowing the user’s GSR to be captured. The box would have some physical indication of the GSR, like an LED or buzzer, that would change its flashing/beeping frequency based on the GSR input. While it would not actually detect ghosts, it would detect how scared the user was. Say, for example, the user is in a well-lit, comfortable space and not experiencing any strong emotions: after calibrating to their current GSR, the “Ghost-o-meter” would buzz and blink only at long intervals. If they then entered a “haunted” dark and scary space, perhaps they would be a little scared, their GSR would rise, and the intervals at which the “Ghost-o-meter” buzzed and flashed would shorten. As the machine is now buzzing and flashing more, the user infers that there are ghosts around. Ideally, this heightened state of arousal would cause the “Ghost-o-meter” to buzz and blink even more rapidly, indicating the strong presence of a “ghost”.

To build this project, I used 3 mm MDF board, some 3 mm clear acrylic, a spray-painted 3.2mm PVC pipe, miscellaneous electronics, and, of course, an Arduino.


I designed a box in Adobe Illustrator to house the electronics, and iterated until I had a box that could fit the components and the handle. I then laser cut this box so that five sides were MDF and the top was clear acrylic, letting a user see the electronics inside. To make the handle, I took some PVC from another project and put conductive copper tape on both sides. I then soldered one wire to each of these strips of tape; these would serve as the probes. I ran the wires through the inside of the pipe and plugged one into 5 volts on the Arduino and the other into the analog 0 pin. I also added some LEDs and a buzzer for good measure, to act as the feedback mechanisms.


Finally, coding this raised several issues, mainly that each person has a different initial or resting GSR. So I took some code that collected average data over a few seconds and spliced it into my own. This new code allowed the “Ghost-o-meter” to calibrate to the user’s resting GSR by averaging values collected over the first 5 seconds. This average became the lower threshold against which the Arduino compared subsequent readings. Ultimately, this let me set the lowest value to the user’s natural GSR, so that any escalation would cause the “Ghost-o-meter” to start buzzing or blinking rapidly.
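
The sketch below is a simplified illustration of that calibration idea rather than the exact code I used; the pin assignments, the five-second averaging window, and the mapping from reading to blink/buzz interval are all assumptions.

```cpp
// Simplified "Ghost-o-meter" calibration sketch. Assumptions: probes feed
// analog pin A0, LED on pin 9, buzzer on pin 8. The first five seconds of
// readings are averaged as the wearer's resting GSR; readings above that
// baseline shorten the interval between blips.
const int gsrPin = A0;
const int ledPin = 9;
const int buzzerPin = 8;
long baseline = 0;

void setup() {
  pinMode(ledPin, OUTPUT);
  pinMode(buzzerPin, OUTPUT);
  long sum = 0;
  int samples = 0;
  unsigned long start = millis();
  while (millis() - start < 5000) {   // 5-second calibration window
    sum += analogRead(gsrPin);
    samples++;
    delay(10);
  }
  baseline = sum / samples;           // resting GSR value
}

void loop() {
  long delta = analogRead(gsrPin) - baseline;
  if (delta < 0) delta = 0;
  // Bigger delta = more "arousal" = shorter interval between blips.
  int interval = map(constrain(delta, 0, 300), 0, 300, 1000, 50);
  digitalWrite(ledPin, HIGH);
  tone(buzzerPin, 880, 30);           // short beep
  delay(30);
  digitalWrite(ledPin, LOW);
  delay(interval);
}
```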


Unfortunately, the first round of testing showed that the probes were a little too sensitive, so I added semi-conductive foam around them to decrease the sensitivity. This helped, but the sensitivity was still too high. What the foam did change, however, was grip: adding it effectively turned the handle from a GSR detector into a pressure sensor. This meant that if the user gripped the handlebar tightly, the “Ghost-o-meter” would act as if a “ghost” had appeared by buzzing and flashing rapidly, while a slight grip would not elicit this response. While this addition took away from the “Ghost-o-meter”‘s GSR reading, I think it can still gauge fear, because people often grip items tighter when they are scared. So in some sense, the “Ghost-o-meter” does record fear, just not in the way I originally intended.

Here are a few (staged) videos of the Ghost-o-meter in action!

Nicholas Sanchez: Internet of Rube Goldberg

For this final, I wanted to create a “simple” project made complex by integrating the Internet of Things. Drawing inspiration from the Rube Goldberg midterm, I sought to make a Rube Goldberg machine that was instigated, and at some points facilitated, by IoT. Essentially, this machine would be set off by pressing a button, which would connect to Twitter and tweet. This tweet would be read by the MKR1000, and an actuator would then activate the next phase of the Rube Goldberg machine. This process would cycle about three times before finally turning on an LED and tweeting something along the lines of “Hey, the LED is on”.

While such a machine posed a considerable hardware challenge, the harder part for me was configuring the microcontroller, not only because of the technique involved, but also because there were no MKR1000s left for me to use. After fooling around with Ethernet shields and an Arduino Yún, I was introduced to the Particle Photon microcontroller, a godsend. Learning how to use this board took some doing, as it has small but significant differences from typical Arduinos; fortunately, there are many online resources for using the Photon in an IoT setting.

Ultimately, I decided to connect the Particle to ThingSpeak and use ThingSpeak as a medium to connect with Twitter. ThingSpeak is an online repository where people using internet-connected boards (such as the ESP8266) can save information. In addition to data storage and visualization, ThingSpeak offers built-in applications for connecting to Twitter. By using ThingSpeak’s ThingTweet, ThingControl, and ThingHTTP applications, I was able to scrape my Twitter account for certain filters and triggers (in this case, “#Thingspeak” as a filter and “blink” as a trigger). This meant that if I tweeted a message with both the filter and the trigger, it would be written to ThingSpeak, which in turn would trigger my Photon. When I tweeted a message with the same filter and the trigger “led”, this told my Photon to turn off the mechanisms.
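
As a rough sketch of how the Photon side of this can look: ThingSpeak exposes a REST endpoint for the last entry written to a channel field, so the Photon can poll it over a plain TCP connection and react when the value changes. This is an assumption-laden illustration, not my exact code; CHANNEL_ID is a placeholder, and a private channel would also need its read API key appended to the request.

```cpp
// Hedged Particle Photon (Wiring/C++) sketch: poll ThingSpeak's
// "last field entry" endpoint and fire an output when the value is "1".
// CHANNEL_ID is a placeholder.
TCPClient client;
String lastValue = "";

void setup() {
  pinMode(D7, OUTPUT);   // on-board LED standing in for a mechanism
}

void loop() {
  if (client.connect("api.thingspeak.com", 80)) {
    client.println("GET /channels/CHANNEL_ID/fields/1/last.txt HTTP/1.1");
    client.println("Host: api.thingspeak.com");
    client.println("Connection: close");
    client.println();

    String response = "";
    unsigned long start = millis();
    while ((client.connected() || client.available()) &&
           millis() - start < 5000) {
      while (client.available()) response += (char)client.read();
    }
    client.stop();

    // The field value is whatever follows the blank line after the headers.
    int bodyStart = response.indexOf("\r\n\r\n");
    if (bodyStart > 0) {
      String value = response.substring(bodyStart + 4);
      value.trim();
      if (value != lastValue && value == "1") {
        digitalWrite(D7, HIGH);        // trigger the next stage
      }
      lastValue = value;
    }
  }
  delay(15000);                        // poll every 15 seconds
}
```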

This meant that I could activate the Rube Goldberg mechanisms from Twitter. However, I still needed a way to tweet, and the cool bit was that I used the Photon itself to tweet, via its TCP client. After some technical difficulty and a few more learning curves, it all finally came together. The code worked such that if I tweeted the filter and trigger described above, the Photon would turn on one motor, then another motor, and finally light up an LED before tweeting “#thingspeak led”. Once this tweet was sent, the Photon would scrape that tweet from Twitter and know to turn itself off.
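
The tweeting direction can go through ThingSpeak’s ThingTweet app, which accepts an HTTP POST containing an API key and a status string. The snippet below is a hedged sketch of that call from the Photon: THINGTWEET_API_KEY is a placeholder, and in a real build spaces in the status should be URL-encoded.

```cpp
// Hedged sketch of tweeting through ThingTweet from the Photon.
// THINGTWEET_API_KEY is a placeholder for the key ThingSpeak issues
// when a Twitter account is linked to the ThingTweet app.
TCPClient tweetClient;

void tweet(String status) {
  String body = "api_key=THINGTWEET_API_KEY&status=" + status;
  if (tweetClient.connect("api.thingspeak.com", 80)) {
    tweetClient.println("POST /apps/thingtweet/1/statuses/update HTTP/1.1");
    tweetClient.println("Host: api.thingspeak.com");
    tweetClient.println("Content-Type: application/x-www-form-urlencoded");
    tweetClient.println("Content-Length: " + String(body.length()));
    tweetClient.println("Connection: close");
    tweetClient.println();
    tweetClient.print(body);
    tweetClient.stop();
  }
}

// Example: tweet("#thingspeak%20led"); once the LED stage has fired.
```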


The next part was building a physical housing. I knew that I wanted gears, pistons, smoke, and an LED. The idea was that, when the trigger was activated, the first motor would turn the gears and send a tweet saying “gears are on”; then the pistons would turn and also send a tweet; next, the smoke would turn on and tweet; and finally, the LED would turn on. Due to logistical difficulties, I had to drop the smoke, but I kept the pistons and gears. Using Illustrator, I designed the SVG case for my animatronic, iterated through it twice, and eventually had a working model.

Overall, this was a great assignment and fun exercise. It was very challenging, but fortunately the resources were available to me. It would be fun to pursue this a little further, and get the machine working well with the smoke. But for now, I am happy with how the machine turned out.

The outside of the mechanism

The inner mechanisms: motor connected to piston shaft, motor connected to gear, Particle Photon

The madness that was this circuit

Nicholas Sanchez: Capstone Documentation

Introduction

Animatronics are, for all intents and purposes, the manifestation of puppetry in today’s age of media and technology. Whether pre-programmed to run the same act or controlled in real time by a distant puppeteer, animatronics fill the niche once occupied only by marionettes, ventriloquist dummies, and shadow puppets. Unlike its low-tech predecessors, however, the animatronic is composed of cutting-edge technology and is by no means limited to the physical restrictions of traditional puppet theatre. Rather, it is embedded with new technologies while still maintaining the essence and ontology of the “puppet”.

With the aid of IMA faculty, staff, and resources, and equipped with the knowledge acquired over several years studying in this program, I set out to create my own animatronic: a puppet that, like those iconic to older puppetry, represented the intricacies and magic of the art form. At the same time, this puppet’s physical form would vary from the archaic standard, as its corpus and means of manipulation would be largely realized with newer technologies.

The purpose of this capstone project was to create an animatronic that could be controlled in real time via computer vision and projection mapping. The goal was a puppet whose actions would mirror those of the puppeteer accurately and with minimal latency, and whose facial expressions would be mapped to the puppeteer’s. Using Arduino, Processing, servos, and other tools, I set out to achieve this goal. And while the most recent iteration may not be pretty, I believe I met my expectations with a relatively high degree of success.

Pre-Production

From my youth until now, I have been fascinated with robots and robotics. I remember the cartoons I watched as a boy, with their fantastical ideas of what an autonomous mechanical being could be. I loved anime like Gundam, where pilots flew gigantic armored robots, as well as movies like Terminator, where humanoid chrome cyborgs walked the Earth. Birthday parties would often take place at Chuck E. Cheese, where a band of animatronic animals sang and advertised pizza. Even my trips to Disneyland filled me with awe, as I was amazed and enamored by the pirate, ghost, and animal animatronics. Needless to say, this is a field I have found interesting for a very long time.

Disney’s animatronics were of particular interest to me due to their nuance and theatricality. For decades, Disney has been a pioneer and leader in the development of animatronics and virtually invented the field. While the earliest animatronics were little more than simple animated animals, over the years Disney has conceived some of the most articulate automata in history. Most notable, of course, is the Buzz Lightyear animatronic in Disneyland’s “Astro Blasters” attraction. Here, a life-sized Buzz Lightyear stands before guests waiting to board the ride and provides contextual exposition. While the novelty of seeing Buzz in real life is unique, its most notable aspects are its design and nuance.

The Buzz Lightyear animatronic differs from its sibling animatronics because it is a hybrid of traditional animatronic construction and new media. While Buzz’s body articulates via some mechanical process (most likely pneumatics), his face is entirely projected, with no physical actuation. Imagine a movie screen, but curved to match the shape of a human face: that is the head of this animatronic, and a hidden projector casts a face onto it, creating the illusion of fully articulate eyes and mouth.

 

What I loved about this animatronic is how the combination of physical mechanism and projection mapping really brings the character to life. The arms and torso, which move physically, give the animatronic form and presence, yet the projected face truly conveys the essence of the character and creates the illusion of reality. When I later decided to make an animatronic for this capstone, I decided that whatever I built should emulate Buzz’s structure.

I began this project as I do all my projects: by pondering the idea behind it and how it might be accomplished.

I started by thinking about things that interested me, particularly with regard to digital fabrication. For instance, laser cutting and 3D printing were topics I considered expanding upon. In the end, however, my love for robotics prevailed.

Having already worked with robotics, I decided I would draw upon previous experience to create a capstone in this area. Once this was decided, my challenge became deciding what type of robot to build. While autonomous and self-sufficient robots are appealing, I thought I would focus on the more artistic and performative robots: animatronics!

Drawing from my adoration of this subject, and remembering some incredible examples (like Buzz Lightyear), I decided that my animatronic would be in part mechanical and in part projected. However, although I could envision the animatronic’s physicality, I still needed to decide how I would control it. Questions arose, such as whether it should be pre-programmed or controlled in real time, and, if controlled in real time, what hardware and software I would need. Ultimately, I decided on an animatronic that could be controlled in real time and would move its arms based on the operator’s arm motions, so as to mirror them. In addition, the animatronic’s face, projected onto the head much like Buzz Lightyear’s, would blink and open its mouth in response to the puppeteer’s eyes and mouth.

Now that I had an idea of what I was going to do, I decided to draw on past projects for inspiration and the knowledge to accomplish it. As stated above, I planned on making an animatronic with moveable arms and a projection-mapped face. I scoured sources like Instructables.com for ideas and methods, and reflected on which of my past projects had been successful and pertinent to this capstone.

In the summer of 2016, I was a student researcher in NYU Shanghai’s DURF program. There, a team of peers and I worked to create a robotic arm that could be moved by tracking a user’s head. The crux of that project was the robotic arm itself, which was constructed from six high-torque servos and driven via Arduino. That experience gave me practice controlling multiple servos through Arduino programming and serial communication, so I already had a foundation for building and operating two robotic arms for the animatronic.
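
The general pattern from that project is simple: the host program streams joint angles over serial, and the Arduino parses them and writes them to the servos. The sketch below illustrates the pattern with just two servos and an assumed comma-separated protocol; it is not the original DURF code.

```cpp
#include <Servo.h>

// Illustrative serial-driven servo control (assumed protocol: the host
// sends lines like "90,45\n"). The Arduino parses the comma-separated
// angles and writes them to two servos on pins 9 and 10.
Servo shoulder;
Servo elbow;

void setup() {
  Serial.begin(9600);
  shoulder.attach(9);
  elbow.attach(10);
}

void loop() {
  if (Serial.available()) {
    int a = Serial.parseInt();            // first angle
    int b = Serial.parseInt();            // second angle
    if (Serial.read() == '\n') {          // end of one command line
      shoulder.write(constrain(a, 0, 180));
      elbow.write(constrain(b, 0, 180));
    }
  }
}
```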

In terms of controlling the arms in real time, the challenge became finding the most efficient way to capture human arm motion while maintaining a workflow to send that data to the Arduino. My initial idea was to use Microsoft’s Kinect in conjunction with Processing, as there are libraries and examples where the two work in tandem to record a user’s arm and joint positions. Utilizing the Kinect’s infrared sensor, Processing can generate a 3D image of a user and attribute a superficial skeleton to their structure. My hope was to capture the positions and angles of the arm joints, like the shoulder, elbow, and wrist, so that I could translate this data into servo-friendly values.

Unfortunately, as you will see later, this process didn’t work out: the Processing-Kinect skeletal tracking only works on a 64-bit PC running 64-bit Windows 10, and despite my best efforts, which included trying several different PCs and even buying my own just to run this system, establishing the skeletal tracking failed for one reason or another.

The ability to control the animatronic in real time was an integral component of this project, so my remedy was to instead use Processing’s built-in computer vision abilities to track light. The idea was that if I held a flashlight in each hand and wrote a Processing sketch that captured the positions of those lights, I could create my own rough version of skeletal tracking. Writing this code was simple, as I had already done a project that tracked a light and captured its x, y coordinates as perceived by the computer’s camera. By adjusting this code for the purposes of my animatronic, I had a good starting point for user arm control.

Similarly, I had already done a project with projection mapping and face mapping the previous semester. My “Singing Spooky Pumpkins” project featured a host of pumpkins that sang the classic song “The Monster Mash”. In that project, one of the pumpkins sat idle until a user sang along with the song, at which point the pumpkin would also “sing”, its eyes and mouth moving based on the singer’s facial expression.

Using the knowledge gained from research and my own prior projects, I set out to build this animatronic.

Now that I had an idea about what I was going to do and how I might accomplish that, I began to get organized by focusing on what I needed and how to best schedule my time.

As far as materials go, I knew I needed the following:

  • Servos
  • Servo holders
  • A power supply (for the Servos)
  • An Arduino
  • PVC pipe for the animatronic’s frame
  • Miscellaneous circuitry (perfboards, wires, soldering, etc.)
  • Laser cutting material
  • Hot Glue
  • Screws and bolts
  • Spray paint

 

For software, I would need:

  • Processing (to map face and arms and send that data to the projector and servos, respectively)
  • Arduino (to drive the servos)
  • Face OSC (for face recognition)
  • Tinkercad (for 3D modeling)
  • Adobe Illustrator and Photoshop (to design the animatronic’s face and body)
  • Mad Mapper (for projection mapping)

 

For computing, controlling, and presentation, I would need:

  • A computer with a 64-bit processor and a camera (two if you are using one to read the face and the other to track the arms)
  • A projector

 

For fabrication, I would need:

  • A laser cutter
  • A 3D printer
  • Drill, Dremel, hacksaw, and a grip or vise

In addition to these, I also needed to organize the project into achievable sections. So, I divided each aspect of the animatronic into a category that could be achieved independent of the others. The idea was that by separating these aspects, I could “divide and conquer” in a timely manner. The categories I conceived were thus Face (facial animation, facial mapping, and projection) and Body (construction, servo arrangement, servo control, body tracking, and aesthetic). After considering this, I set out on my constructive journey.

Production

The animatronic could be divided broadly into two categories: its face and its body. Each of these categories brought its own challenges in terms of construction and design. I created some 3D models to help myself visualize and better understand what I would build, and these models helped me in many ways.

For instance, I knew that I wanted to build a head with two eyes and a mouth that could be projected onto. I knew I needed a physical frame or skeleton, and wanted an exoskeleton to cover these parts. The arms, which would need to move to emulate the puppeteer’s, needed three points of motion, or degrees of freedom: two in the shoulder and one in the elbow. I think the 3D models provided a basic but explanatory idea of what the animatronic would be, and they gave me a springboard from which to further my development.

As nothing physical had been constructed yet (all I had were 3D models at this point), I was required to build a paper prototype. The idea of a paper prototype is to create a model sans technology, to give others an idea of how it should work while also articulating the basic fundamentals of its abilities. To make mine, I used spare boxes and cardboard to create a body. I knew the arms needed to bend, so I cut the arm boxes where the “elbows” would be and used a flap along the “shoulder”. Attached to this flap were some empty soda bottles, which fit conveniently into the shoulder socket; this allowed me to rotate the shoulder and added that final degree of freedom.

For the paper prototype’s head, I cut a soft ball in half and glued the halves to the head. I added a paper nose, and then used sticky notes with different mouth positions to simulate what the real projection might show.

I showed the paper prototype to my classmates and received valuable feedback from the group. For instance, I learned that the Wii Nunchuk, which I originally had planned on using to control the arms, was actually not a very good medium. Therefore, I was advised to use the Microsoft Kinect instead. In addition, this model was rather big, and I was encouraged to scale it down. After this session, with this advice in mind, I set out to build the first iteration of my animatronic.

First Iteration

The first iteration was by no means a finished product. In fact, I designed it to be more of a proof of concept than an actual model: the idea was to get a working model functional, not pretty or high quality. So, using cheap materials and scrap, I began building it. I started by buying some PVC piping for the animatronic’s skeletal frame. Unfortunately, the first pipes were far too big, so I returned to the hardware store for smaller tubes. Using these, I built the basic frame easily.

For the animatronic head, I used a discarded box and plastered one side with polymer clay. This polymer clay was molded to resemble my initial 3D model’s face.

Initially, I wanted to use nine-gram servos for the animatronic’s arms. So, using Tinkercad, I designed several holders for these servos, as none existed. After a few iterations, I settled on a design and printed the holders out, then assembled the arms and placed them on the animatronic.

For the facial projection, I needed to create some basic assets that could be animated in Processing. Using Adobe Photoshop, I created “eyes” and “mouths” for the animatronic and exported them as PNGs. There were seven PNGs for the eyes, each representing a different level of openness; for instance, one for when the lids were closed and one for when they were wide open. In addition, I made mouth PNGs to represent the mouth positions for the nine phonetic sounds English speakers make while speaking.

Using Face OSC with Processing, I was able to map several different points on my face. Face OSC would track the levels of my eyebrows and mouth, and Processing would receive and interpret these points, translating them into Cartesian x-y coordinates. I created a Processing sketch that used these points and mapped the face PNGs to the corresponding levels: if my eyes were opened wide, Processing would display the PNG depicting the widest eye, and if my mouth made an “a” sound, Processing would show the PNG depicting the “a” sound. Unfortunately, due to Face OSC’s buggy nature, the mouth mapping didn’t work too well and often failed to show the right mouth at the right time. Despite this, Face OSC did a great job mapping my eyes. After creating this initial code, all that remained was to project it onto a surface.

My original plan for controlling the servos in real time was to make a Processing sketch that used 3D skeletal tracking via the Microsoft Kinect and translated that into servo-friendly values, which would then be sent to the Arduino to drive the servos. Though seemingly complex, the theory and implementation are straightforward enough. For this first iteration, however, I instead used a Processing sketch without skeletal tracking: an in-program GUI allowed you to control the servos, and this is how the first iteration’s arms were controlled.

I presented this first iteration to the class to refine the design and gather further feedback. Unfortunately, nine-gram servos are generally low torque and have weak nylon gears that are prone to breaking. Still, the feedback I received was ultimately positive, as my peers understood that I was still in the design process; they saw what I was trying to accomplish and that the basics were there.

The improvements I noted from this testing were the following. First, the realistic mouth PNGs were not achieving their intended goal: they did not look robotic and were not mapping properly. Second, I needed stronger, better servos. Finally, the arm motion was quite buggy and inconsistent because I was not supplying each servo with enough power. All of this would have to be improved in the next iteration.

Second Iteration

This time around, I used longer pieces of PVC piping to construct the second iteration’s frame. After building this version, I spray-painted it silver, as much of the frame would be visible. I also drilled two holes into each of the frame’s “feet” so that I could later screw it down onto a base.

My initial idea for the animatronic’s head was that it would look “robotic” and have a cute face that allowed me to project onto its eyes and mouth. Physically, this meant that the eye and mouth areas needed to be smooth, matte, and projection friendly; so long as these requirements were met, the rest of the head design could be purely aesthetic. In Tinkercad, I designed a head that resembled my initial sketches in a more stylized and 3D-printable manner.

For the arms, as previously stated, I needed larger servos. I upgraded to the MGS-960, a cheap alternative with metal gears and higher torque, giving the animatronic better articulation than before. To mount these servos, I used braces specifically designed to hold servos of this size. I designed the arm so that one servo served as a shoulder and controlled the shoulder’s “yaw”. Attached to this was a secondary shoulder servo, which controlled only the shoulder’s “pitch”; this second servo was attached to braces representing the animatronic’s upper arm. At the other end of the upper arm was the third servo, which functioned as an elbow and connected the upper arm to the forearm and hand. I used the braces to construct the arm and, after tediously attaching the servos, managed to make this configuration work with the previous arm control design.
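
As an illustration of that arrangement, the sketch below sets up the three joints of one arm as named servos and clamps each to a safe range before writing a pose. The pin numbers and joint limits are assumptions, not measurements from the actual build.

```cpp
#include <Servo.h>

// Three-servo arm layout for one arm (assumed pins and joint limits).
Servo shoulderYaw;    // rotates the whole arm left/right
Servo shoulderPitch;  // raises/lowers the upper arm
Servo elbow;          // bends the forearm

void setup() {
  shoulderYaw.attach(3);
  shoulderPitch.attach(5);
  elbow.attach(6);
}

// Write one pose, clamping each joint so the braces cannot hit the frame.
void setArm(int yaw, int pitch, int bend) {
  shoulderYaw.write(constrain(yaw, 20, 160));
  shoulderPitch.write(constrain(pitch, 10, 170));
  elbow.write(constrain(bend, 0, 135));
}

void loop() {
  setArm(90, 90, 45);   // example neutral-ish pose
  delay(1000);
}
```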

I purchased a computer power supply to power the six servos. It could supply 5 volts at 12 amps, which was ideal because each servo can draw upwards of 1 amp at 5 volts; this let the servos operate under nominal conditions, meaning they would turn, move, and otherwise rotate better than the 9-gram versions had.

The next iteration of the facial mapping code changed very little. I removed the mouth PNGs in favor of five mouth “vents”, which would grow longer and glow orange when the operator opened their mouth, and shrink and turn grey when the operator was not speaking.

The most notable aspect of this second iteration was the new code for controlling the arms. Initially, my plan was to use the Microsoft Kinect: by using the Kinect in tandem with Processing, I should theoretically have been able to use skeletal tracking and translate the arm and joint values into servo-friendly values, which would then be used by the Arduino. Unfortunately, this failed. I cannot overstate the pain I went through trying to get it working, but I will document it as follows. While the Kinect and Processing can work together on macOS, the skeletal tracking library is not included in that package; to get skeletal tracking, one must use a PC. After figuring this out, I went and bought a 64-bit PC running a 64-bit version of Windows 10. Unfortunately, this computer failed to work, as did the next two PCs (with identical specs). So, taking this as an omen that skeletal tracking was not meant for this project, I cheated and created a new scheme for controlling the skeleton.

Using light tracking in Processing, I was able to obtain the x-y coordinates of any color I chose to track. This meant that if I used a flashlight, for instance, I could track its x-y position and record those values. I elected to use two flashlights, one in each hand, so that I could get the x-y coordinates of each hand. Then I created an inverse kinematics scheme that generated shoulder, elbow, and wrist values from the color tracking. This required tweaking, but once done, I managed to successfully generate values for the servos.
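
One standard way to turn a tracked hand position into shoulder and elbow angles is two-link inverse kinematics using the law of cosines. The function below sketches that math in Arduino-style C++; the link lengths are placeholders rather than the arm’s real dimensions, and my actual scheme was tuned by hand rather than taken verbatim from this.

```cpp
#include <math.h>

// Two-link inverse kinematics sketch: given a hand target (x, y) relative
// to the shoulder, compute shoulder and elbow angles in degrees.
// L1/L2 (upper-arm and forearm lengths) are placeholder values.
const float L1 = 10.0;
const float L2 = 10.0;

// Returns false if the target is out of reach.
bool solveArm(float x, float y, float &shoulderDeg, float &elbowDeg) {
  float d2 = x * x + y * y;
  float d = sqrt(d2);
  if (d > L1 + L2 || d < fabs(L1 - L2)) return false;

  // Law of cosines gives the elbow bend (0 = arm fully straight).
  float cosElbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2);
  float elbowRad = acos(cosElbow);

  // Shoulder = angle to the target minus the offset caused by the bent elbow.
  float shoulderRad = atan2(y, x) -
                      atan2(L2 * sin(elbowRad), L1 + L2 * cos(elbowRad));

  shoulderDeg = shoulderRad * 180.0 / M_PI;
  elbowDeg = elbowRad * 180.0 / M_PI;
  return true;
}
```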

I laser cut a base to hold the animatronic and attached the animatronic’s frame to it. In addition, I took some scrap and fashioned it into aesthetic components to embellish the animatronic’s body. After spray-painting the base and body parts to match, I assembled the physical body and was ready to bring everything together.

Finally, I hooked everything up: the servos to power and the Arduino; the Arduino to the computer tracking the body via Processing; and the projector to the computer running the Processing sketch scanning my face. After some more adjustments in the space, I filmed myself with all components of the second iteration working with a sufficient degree of success.

This final iteration gave me much to think about. While I was happy with the design, there are still many aspects to be fine-tuned. I plan on completing one more iteration, which will be an improvement on all the others.

 

Network Everything: reaDIYmate

Networked projects can be fun and are often indicative of the greater utility of IoT. While many proponents of networking everything praise these technologies’ utility above all else, many artists and hackers use them instead for their aesthetic value. For instance, the reaDIYmate (as shown above) is a project whereby users can assemble a networked Arduino and connect it to whatever internet application they desire. While this is pretty run-of-the-mill as far as IoT goes, the project’s novelty lies in its interactive aspect: each reaDIYmate is an Arduino “robot” that in some way responds to your internet activity. For example, say you program your reaDIYmate to respond to all incoming tweets; the robot will then light up and make noise every time somebody tweets at you, providing a physical and interesting interface to your personal network. I like this project because I have a vested interest in Arduino automatons, and I believe it serves as a simple, ridiculous, but very relevant instance of networking devices and using a microcontroller to react physically to those connections.

Network Everything: Spring Break Networking by Nick S

Over this break, I stayed in Shanghai to work on my capstone and virtually lived on campus. Because I was occupied working with computers, I had little time for things like buying groceries or going home, which meant I was buying food and computer parts all break. What is most notable about this experience is that I began using Alipay and WeChat Pay during the break, and noticed how prevalent the networks surrounding these technologies are.

I needed to buy a computer, and thus made various trips to Baoshan Road. There, I consulted various vendors and eventually had a computer assembled. Out of cash, I used Alipay to buy it. To my surprise, every vendor accepted Alipay, which made the process easy. What’s more, each and every meal I bought was paid for with either Alipay or WeChat Pay. Needless to say, I pulled out my phone to scan these signs almost all break.

This new modular economy presents something resembling a network economy, because it revolves around people and their technologies. By using their phones, vendors and consumers can make exchanges via smart technology; these transactions, passing from person to person, act like a mesh network, forming a kind of mesh economy.

Nicholas Sanchez: Junji Ito’s Horrorific AR


Me posing with Junji Ito’s cutout at the exhibit

Augmented reality (AR) technology, while not new, still has the capacity for interesting and useful development, ranging from pragmatic devices like a car’s heads-up display to art exhibitions. What’s more, the wide proliferation of cell phones and AR headsets, along with readily programmable AR software, has allowed this technology to be designed by different people and for different purposes.


Instructions on how visitors should use headsets

One such example is Junji Ito’s AR horror exhibition at the Modern Art Museum in Pudong. Here, visitors were given AR headsets embedded with cell phones. When worn, the headsets let viewers see the exhibit around them as seen through the phone; when looking at a picture on a wall, however, the image would suddenly change into an animation with sound. This was a very interesting way of experiencing Ito’s work, and it did add a level of uniqueness. Outside of the initial AR section, patrons could see snippets from his manga, view his process, and experience other elements of his unique horror style.


Viewers seeing an AR image with their headsets

However, the greatest takeaway from this exhibit concerned the AR itself. While I would say the AR definitively enhanced the experience, much of the “horror” aspect was absent. I attribute this to various factors in the exhibit’s ambience: it was well lit and bright, crowded, noisy, and kept at a comfortable temperature. These factors, which make for a good art gallery, also make for a poor setting for horror; each of them kept me comfortable and calm and prevented the images from instilling any fear in me. What’s more, the finite number of headsets meant we had to share among our group, so for everyone to see, we had to take a headset off and put it on again for each picture, pulling us out of the illusion and grounding us in the pleasant setting of the gallery. The lesson here is that setting is imperative to instilling fear in viewers.

While the physical gallery itself was not horrific, the content and art were quite disturbing, and effective on a semiotic level. I think AR can be used to instill horror in viewers given the proper setting and programming, and this would warrant further investigation in future projects.

Nicholas Sanchez: Arduino Reaction Gauge

Gauging fear is an interesting problem, as this human emotion is not merely a state of mind; it manifests as a plethora of physiological responses and reactions. Based on correlations between heightened states of awareness and certain biological functions (like heart rate), we can determine with some degree of confidence a person’s response, and by proxy their fear, in certain situations. From such responses, we can gauge someone’s feelings given certain stimuli.

To try our hand at measuring these responses, we attempted to observe galvanic skin response given various stimuli. The idea behind galvanic skin response is that human skin’s resistance to electricity, which is generally in the megaohm range, decreases notably when the person is “aroused”, or in a state of heightened senses. For instance, the theory suggests that if a person is fearful or scared, they are aroused, and their skin’s natural resistance drops below normal levels. However, this method of gauging fear is far from exact or scientifically sound, and its results are, to a large degree, speculative and interpretive. Nevertheless, this was what we had available to us, so we used it to attempt to gauge fear.

To test this, we made a simple analog circuit in which two wires were attached with conductive tape to a person’s fingers. One finger’s wire connected to 5 volts; the other ran to a junction that fed an analog input pin and also connected through a 500 kilohm resistor to ground. Watching the readings through the serial port, we could observe a distinct average conductivity for each person in a relatively static state, and variances when the person wearing the wires responded to an emotional stimulus. We then made the Arduino communicate with Processing to record all the data.
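
Given that divider (skin between 5 volts and the analog pin, 500 kilohms from the pin to ground), the skin’s approximate resistance can be computed from the analog reading. The sketch below is a hedged illustration of that read-and-convert step, printing comma-separated values for a Processing sketch to log; it is not our exact code.

```cpp
// Read the GSR voltage divider described above: skin between 5 V and A0,
// 500 kΩ from A0 to ground. Convert the reading to an approximate skin
// resistance and print it for logging over serial.
const float R_FIXED = 500000.0;   // 500 kΩ resistor to ground

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(A0);                   // 0..1023
  if (raw > 0) {
    // Divider: raw / 1023 = R_FIXED / (R_skin + R_FIXED)
    float rSkin = R_FIXED * (1023.0 - raw) / raw;
    Serial.print(millis());
    Serial.print(",");
    Serial.println(rSkin);                    // ohms (rough estimate)
  }
  delay(100);                                 // ten samples per second
}
```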

My fingers connected to

Schematic of the Arduino circuit

An issue with the code was that it failed to record data over extended periods of time; as a result, no data was saved for the video I was watching (which was about three minutes long). Still, this is an interesting approach to gauging fear, and it will be interesting to see how these methodologies play out over the course of this class.

 

This week we worked on collecting data on an Arduino and storing it on an SD card. The task was not just to collect data and write it to the SD card, but also to create a graphic to visualize that data. We began by testing the SD read and write functions with the Arduino. Initially, trouble arose when trying to have the Arduino communicate with an uncleared SD card; after some formatting of that card, we were able to get the Arduino to write to it. From there, we hooked the Arduino up to an ultrasonic sensor and the SD reader. We coded the Arduino so that it would record the sensor’s readings five times each second and, based on a trigger distance of approximately 150 centimeters, log whether it had been triggered or not.
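
The sketch below is a hedged reconstruction of that logger rather than our exact code: an HC-SR04-style sensor read with pulseIn(), the standard SD library for the card, five samples per second, and a 0/1 flag for readings under roughly 150 centimeters. The pin choices and the filename are assumptions.

```cpp
#include <SPI.h>
#include <SD.h>

// Ultrasonic distance logger (assumed pins: trig 7, echo 8, SD chip
// select 10). Logs "millis,distance_cm,triggered" five times per second.
const int trigPin = 7;
const int echoPin = 8;
const int sdCsPin = 10;
const long TRIGGER_CM = 150;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  SD.begin(sdCsPin);
}

long readDistanceCm() {
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000);   // microseconds, 30 ms timeout
  return duration / 58;                            // approximate centimeters
}

void loop() {
  long cm = readDistanceCm();
  int triggered = (cm > 0 && cm < TRIGGER_CM) ? 1 : 0;

  File logFile = SD.open("log.txt", FILE_WRITE);
  if (logFile) {
    logFile.print(millis()); logFile.print(",");
    logFile.print(cm);       logFile.print(",");
    logFile.println(triggered);
    logFile.close();
  }
  delay(200);   // five samples per second
}
```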

Arduino hooked up to ultrasonic sensor and SD reader

This was completed, and the next step was to give it housing. We found a discarded box that fit all the components, including a LiPo battery. After placing these inside the box and gluing them down, we placed it on a table outside the IMA studio to gauge people entering or exiting.

One issue was that the box hid any feedback about what the sensor was gauging. As there was no external indication of whether the sensor had detected a person, we opted to add an LED to the circuit. This LED would be green when nobody was detected and turn red when someone moved within the ultrasonic sensor’s trigger distance.

After collecting data for a little over an hour, we took the SD card out and coded a Processing sketch to analyze the data from it. We changed the file into a CSV, as online guides recommended. In addition, whereas the txt data separated each point with a comma, we changed those to spaces so that Processing could “split” each string into time, value, and trigger variables. Below are the two sketches that show the finished product of this endeavor.