Nicholas Sanchez: Capstone Documentation

Introduction

Animatronics are, for all intents and purposes, the manifestation of puppetry in today’s age of media and technology. Whether pre-programmed to run the same act or controlled in real time by a distant puppeteer, animatronics fill the niche once occupied only by marionettes, ventriloquist dummies, and shadow puppets. Unlike its low-tech predecessors, however, the animatronic is composed of cutting-edge and novel technology, and is by no means limited to the physical restrictions present in traditional puppet theatre. The animatronic is embedded with new technologies while still maintaining the essence and ontology of the “puppet”.

With the aid of the IMA faculty, staff, and resources, and equipped with the knowledge acquired over several years studying in this program, I set out to create my own animatronic: a puppet that, like the iconic figures of older puppetry, represented the intricacies and magic of the art form. At the same time, this puppet’s physical form would vary from the archaic standard, as its corpus and means of manipulation would be largely built from newer technologies.

The purpose of this capstone project was to create an animatronic that could be controlled in real time via computer vision and projection mapping. The goal was a puppet whose arm movements would mirror those of the puppeteer with minimal latency and high accuracy, and whose facial expressions would be mapped to the puppeteer’s. Using Arduino, Processing, servos, and other tools, I set out to achieve this goal. And while the most recent iteration may not be pretty, I believe I met my expectations with a relatively high degree of success.

Pre-Production

From my youth until now, I have been fascinated with robots and robotics. I can remember the cartoons I watched as a boy, which showed fantastical ideas of what an autonomous mechanical being could be. I loved anime like Gundam, where pilots flew gigantic armored robots, as well as movies like Terminator, where humanoid chrome cyborgs walked the Earth. Birthday parties would often take place at Chuck E. Cheese, where a band of animatronic animals sang and advertised pizza. Even my trips to Disneyland would fill me with awe, as I was amazed and enamored with the pirate, ghost, and animal animatronics. Needless to say, this is a field I have found interesting for a very long time.

Disney’s animatronics were of particular interest to me, due to their nuance and theatricality. For decades, Disney has been a pioneer and leader in the development of animatronics, and virtually invented the field. While the earliest animatronics were nothing more than simple and rather uninteresting animated animals, over the years Disney has conceived some of the most articulate automata in history. Most notable to me is the “Buzz Lightyear” animatronic in Disneyland’s “Astro Blasters” attraction. Here, a life-sized Buzz Lightyear stands before guests waiting to board the ride and provides contextual exposition. While the novelty of seeing Buzz in real life is unique, its most notable aspects are its design and nuance.

The Buzz Lightyear animatronic differs from its sibling animatronics because it is a hybrid of traditional animatronic construction and new media. While Buzz’s body articulates via some mechanical process (most likely pneumatics), his face is projected rather than physically actuated. Imagine a movie screen, but curved to match the shape of a human face: that is the head of this animatronic, and a hidden projector casts a face onto it, creating the illusion of fully articulate eyes and a mouth.

 

What I loved about this animatronic is how the combination of physical mechanism and projection mapping really brings the character to life. The arms and torso, which physically move, give the animatronic form and presence, yet the projected face truly conveys the essence of the character and creates the illusion of life. When I later decided to make an animatronic for this capstone, I decided that whatever I built should emulate Buzz’s structure.

I began this project as I do with all my projects: by pondering the idea behind it and how it might be accomplished.

I started thinking about things that interested me, particularly with regard to digital fabrication. For instance, topics such as laser cutting and 3D printing were notions I considered expanding upon. In the end, however, my love for robotics prevailed.

Having already worked with robotics, I decided that I would draw upon previous experience to create a capstone relating to this subject. Once this was decided, my challenge became deciding what type of robot to build. While autonomous and self-sufficient robots are appealing, I thought I might focus on the more artistic and performative robots: animatronics!

Drawing from my adoration of this subject, and remembering some incredible examples (like Buzz Lightyear), I decided that my animatronic would be part mechanical, part projected. However, although I could ideate the animatronic’s physicality, I still needed to decide how I would control it. Questions arose, such as whether it should be pre-programmed or controlled in real time, and if controlled in real time, what hardware and software I would need. Ultimately, I decided to create an animatronic that could be controlled in real time and would move its arms based on the operator’s arm motions, so as to mirror them. In addition, the animatronic’s face, which would be projected onto the head much like Buzz Lightyear’s, would blink and open its mouth in relation to the puppeteer’s eyes and mouth.

Now that I had an idea of what I was going to do, I drew on past projects for inspiration and for the knowledge to accomplish it. As stated above, I planned on making an animatronic with moveable arms and a projection-mapped face. I scoured sources like Instructables.com for ideas and methods, and reflected on which of my past projects had been successful and pertinent to this capstone.

In the summer of 2016, I was a student researcher in NYU Shanghai’s DURF program. In this program, a team of peers and I worked to create a robotic arm that could be moved by tracking a user’s head. The crux of this project was the robotic arm, which was constructed from six high-torque servos and driven via Arduino. This experience taught me how to control multiple servos using Arduino programming and serial communication, so I already had the foundation for how to build and operate the animatronic’s two robotic arms.

In terms of controlling the arms in real time, the challenge became finding the most efficient method for capturing human arm motion while maintaining a workflow that could send that data to the Arduino. My initial idea was to use Microsoft’s Kinect in conjunction with Processing, as libraries and examples exist in which the two work in tandem to record a user’s arm and joint positions. Utilizing the Kinect’s infrared sensor, Processing can generate a 3D image of a user and attribute a superficial skeleton to their structure. My hope was to capture the positions and angles of the arm joints (shoulder, elbow, and wrist) so that I could translate this data into servo-friendly values.
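
In isolation, that conversion is simple to show. The Processing sketch below is an illustrative stand-in rather than code from my project: it takes three tracked joint positions, computes the elbow bend between the upper arm and forearm, and clamps the result to the 0-180 degree range a hobby servo expects (the joint values in setup() are made up).

    // Illustrative sketch: turn three tracked joint positions into a
    // servo-friendly elbow angle.
    float jointAngle(PVector shoulder, PVector elbow, PVector wrist) {
      PVector upperArm = PVector.sub(shoulder, elbow);  // elbow -> shoulder
      PVector foreArm  = PVector.sub(wrist, elbow);     // elbow -> wrist
      return degrees(PVector.angleBetween(upperArm, foreArm));  // 0..180
    }

    int toServoAngle(float angleDeg) {
      return int(constrain(angleDeg, 0, 180));  // whole degrees for the servo
    }

    void setup() {
      // quick check with made-up joint positions
      PVector s = new PVector(0, 0), e = new PVector(0, 100), w = new PVector(80, 160);
      println("elbow servo angle: " + toServoAngle(jointAngle(s, e, w)));
    }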

Unfortunately, as you will see later, this process didn’t work out: this Processing-Kinect setup for skeletal tracking only works on a 64-bit PC running 64-bit Windows 10. Despite my best efforts, which included using several different PCs and even buying my own just to run this system, establishing the skeletal tracking failed for one reason or another.

Now, the ability to control the animatronic in real time was an integral component of this project, so my remedy was to use Processing’s built-in computer vision abilities to track light instead of the Kinect. The idea was that if I held a flashlight in each hand and created a Processing sketch that captured the positions of those lights, I could create my own version of skeletal tracking. Writing this code was simple, as I had already done a project that tracked lights and recorded each light’s x-y coordinates as perceived by the computer’s camera. By adjusting this code for the purposes of my animatronic, I had a good starting point for user arm control.
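
The core of that tracking code looks something like the sketch below. This is a simplified reconstruction rather than my exact sketch (it needs Processing’s Video library installed): it scans each camera frame for its brightest pixel, which with a flashlight in the shot is a crude but workable stand-in for a tracked hand, and marks that pixel’s x-y position.

    import processing.video.*;   // Processing's Video library (install via the IDE)

    Capture cam;

    void setup() {
      size(640, 480);
      cam = new Capture(this, width, height);
      cam.start();
    }

    void draw() {
      if (cam.available()) cam.read();
      image(cam, 0, 0);

      // find the brightest pixel in the frame -- with a flashlight in view,
      // this acts as a crude stand-in for skeletal tracking
      cam.loadPixels();
      int brightestIndex = 0;
      float brightest = -1;
      for (int i = 0; i < cam.pixels.length; i++) {
        float b = brightness(cam.pixels[i]);
        if (b > brightest) {
          brightest = b;
          brightestIndex = i;
        }
      }
      int lightX = brightestIndex % cam.width;
      int lightY = brightestIndex / cam.width;

      noFill();
      stroke(255, 0, 0);
      ellipse(lightX, lightY, 20, 20);   // mark the tracked light
    }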

Similarly, I had already done a project with projection mapping and face mapping the previous semester. My “Singing Spooky Pumpkins” project featured a group of pumpkins that sang the classic song “The Monster Mash”. One of the pumpkins sat idle until a user sang along with the song, at which point it would also “sing”, its eyes and mouth moving based on the singer’s facial expression.

Using the knowledge gained from research and my own prior projects, I set out to build this animatronic.

Now that I had an idea about what I was going to do and how I might accomplish that, I began to get organized by focusing on what I needed and how to best schedule my time.

As far as materials go, I knew I needed the following:

  • Servos
  • Servo holders
  • A power supply for the servos
  • An Arduino
  • PVC pipe for the animatronic’s frame
  • Miscellaneous circuitry (perfboards, wires, solder, etc.)
  • Laser-cutting material
  • Hot glue
  • Screws and bolts
  • Spray paint

 

For software, I would need:

  • Processing (to map face and arms and send that data to the projector and servos, respectively)
  • Arduino (to drive the servos)
  • Face OSC (for face recognition)
  • Tinkercad (for 3D modeling)
  • Adobe Illustrator and Photoshop (to design the animatronic’s face and body)
  • Mad Mapper (for projection mapping)

 

For computing, controlling, and presentation, I would need:

  • A computer with a 64-bit processor and a camera (two computers if one is reading the face and the other is tracking the arms)
  • A projector

 

For fabrication, I would need:

  • A laser cutter
  • A 3D printer
  • Drill, Dremel, hacksaw, and a clamp or vise

In addition to materials, I needed to organize the project into achievable sections, so I divided the animatronic into categories that could each be tackled independently of the others. The idea was that by separating these aspects, I could “divide and conquer” in a timely manner. The categories I conceived were Face (facial animation, facial mapping, and projection) and Body (construction, servo arrangement, servo control, body tracking, and aesthetics). With this plan, I set out on my constructive journey.

Production

The animatronic could be divided broadly into two categories: its face and its body. Each of these categories brought its own challenges in construction and design. I created some 3D models to help myself visualize and better understand what I would build, and these models helped me in several ways.

For instance, I knew that I wanted to build a head with two eyes and a mouth that could be projected onto. I knew I needed a physical frame or skeleton, and wanted an exoskeleton to cover these parts. Each arm, which would need to move to emulate the puppeteer’s, needed three points of motion, or degrees of freedom: two in the shoulder and one in the elbow. My 3D models provided a basic but explanatory idea of what the animatronic would be, and a springboard from which I could further my development.

As nothing physical had been constructed yet (all I had were 3D models at this point), I was required to build a paper prototype. The idea of a paper prototype is to create a model sans technology, to give others an idea of how the final piece should work while articulating the basics of its abilities. To make my paper prototype, I used spare boxes and cardboard to create a body. I knew that the arms needed to bend, so I cut the arm boxes where the “elbows” would be and used a flap along each “shoulder”. Attached to this flap were empty soda bottles, which fit conveniently into the shoulder socket; this allowed me to rotate the shoulder and added that final degree of freedom.

For the paper prototype’s head, I cut a soft ball in half and glued the halves to the head. I added a paper nose, and then used sticky notes with different mouth positions to simulate what the real projection might show.

I showed the paper prototype to my classmates and received valuable feedback from the group. For instance, I learned that the Wii Nunchuk, which I originally had planned on using to control the arms, was actually not a very good medium. Therefore, I was advised to use the Microsoft Kinect instead. In addition, this model was rather big, and I was encouraged to scale it down. After this session, with this advice in mind, I set out to build the first iteration of my animatronic.

First Iteration

Now, the first iteration was by no means a finished product. In fact, I designed it to be more of a “proof of concept” than an actual model; the idea was to get a working model functional, not pretty or high quality. So, using cheap materials and scrap, I began building. I started by buying some PVC piping for the animatronic’s skeletal frame. Unfortunately, the first pipes were far too big, so I returned to the hardware store for smaller tubes, from which I built the basic frame easily.

For the animatronic head, I used a discarded box and plastered one side with polymer clay. This polymer clay was molded to resemble my initial 3D model’s face.

Initially, I wanted to use nine-gram servos for the animatronic’s arms. Using Tinkercad, I designed several holders for these servos, as I could not find any ready-made. After a few iterations, I settled on a design, printed the holders, assembled the arms, and placed them on the animatronic. After that, all that was left was to wire the servos to the Arduino so they could be controlled.

For the facial projection, I needed to create some basic assets that could be animated in Processing. Using Adobe Photoshop, I created “eyes” and “mouths” for the animatronic and exported them as PNGs. There were seven PNGs for the eyes, each representing a different level of openness: one for when the lids were closed, another for when they were wide open, and so on. In addition, I made mouths representing the nine basic mouth positions English speakers make while talking.

Using Face OSC with Processing, I was able to map several points on my face. Face OSC would track the levels of my eyebrows, eyes, and mouth, and Processing would receive and interpret these values. I wrote a Processing sketch that used these values to display the corresponding PNGs. For instance, if my eyes were opened wide, Processing would display the PNG depicting the widest eye; if my mouth made an “a” sound, Processing would show the PNG for the “a” sound. Unfortunately, due to Face OSC’s buggy nature, the mouth mapping didn’t work very well and often failed to show the right mouth at the right time. Despite this, Face OSC did a great job mapping my eyes. After creating this initial code, all that remained was to project it onto a surface.
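
The eye mapping boiled down to logic like the sketch below. It is a pared-down reconstruction, not my full code: it assumes the oscP5 library, Face OSC’s default port (8338) and its /gesture/eye/left message, placeholder file names for the seven eye PNGs, and a rough guess at the value range Face OSC reports.

    import oscP5.*;  // OSC library for Processing, used to receive Face OSC data

    OscP5 oscP5;
    PImage[] eyeFrames = new PImage[7];  // eye0.png (closed) ... eye6.png (wide open)
    float eyeOpenness = 0;               // streamed from Face OSC

    void setup() {
      size(800, 400);
      oscP5 = new OscP5(this, 8338);  // Face OSC's default output port
      for (int i = 0; i < eyeFrames.length; i++) {
        eyeFrames[i] = loadImage("eye" + i + ".png");  // placeholder file names
      }
      // route the eye-openness message to the handler below
      oscP5.plug(this, "onEyeLeft", "/gesture/eye/left");
    }

    public void onEyeLeft(float value) {
      eyeOpenness = value;
    }

    void draw() {
      background(0);
      // the eye value sat roughly between 2 (closed) and 4 (wide) in my setup;
      // map it onto the seven PNG frames
      int frame = int(constrain(map(eyeOpenness, 2.0, 4.0, 0, eyeFrames.length - 1),
                                0, eyeFrames.length - 1));
      image(eyeFrames[frame], 150, 100);  // left eye
      image(eyeFrames[frame], 450, 100);  // right eye, mirroring the same value
    }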

My original plan for real-time servo control was a Processing sketch that used three-dimensional skeletal tracking via the Microsoft Kinect and translated it into servo-friendly values, which would then be sent to the Arduino and used to drive the servos. Though seemingly complex, the theory and implementation are straightforward. For this first iteration, however, I instead used a Processing sketch without skeletal tracking: an in-program GUI let me set the servo positions directly, and this is how the first iteration’s arms were controlled.
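
In spirit, that control sketch worked like the following. This is a minimal stand-in rather than my actual GUI: the mouse position acts as a slider for a single shoulder servo, and each new angle is written to the Arduino as a line of text. The serial protocol is my own invention here; the Arduino sketch on the other end has to parse the number and write it to the servo.

    import processing.serial.*;  // Processing's built-in serial library

    Serial arduino;
    int shoulderAngle = 90;  // start the servo centred

    void setup() {
      size(400, 200);
      printArray(Serial.list());                            // list available ports
      arduino = new Serial(this, Serial.list()[0], 9600);   // adjust the index for your machine
    }

    void draw() {
      background(50);
      // bare-bones "GUI": the mouse's x position acts as a slider
      shoulderAngle = int(map(mouseX, 0, width, 0, 180));
      fill(255);
      rect(0, 120, mouseX, 30);
      text("shoulder angle: " + shoulderAngle, 10, 40);
    }

    void mouseDragged() {
      // send the angle as a line of text for the Arduino sketch to parse
      arduino.write(shoulderAngle + "\n");
    }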

I presented this first iteration to the class to test the design and gather further feedback. The feedback I received was ultimately positive, since my peers understood that I was still in the design process; in general, they understood what I was trying to accomplish and saw that the basics were there. One limitation was already obvious, though: nine-gram servos are generally low torque and have very weak nylon gears, which are prone to breaking.

The improvements I noted from this testing were the following. First, the realistic mouth PNGs were not achieving their intended goal: they did not look robotic and were not mapping properly. Second, I needed stronger, better servos. Third, the arm motion was quite buggy and inconsistent because I was not supplying each servo with enough power. All of this would have to be improved in the next iteration.

Second Iteration

This time around, I used longer pieces of PVC piping to construct the second iteration’s frame. After building this version, I spray-painted it silver, as much of the frame would be visible. I also drilled two holes into each of the frame’s “feet” so that I could later screw it onto a base.

My initial idea for the animatronic’s head was that it would look “robotic” and have a cute face that allowed me to project onto its eyes and mouth. Physically, this meant that the eye and mouth areas needed to be smooth, matte, and projection-friendly; as long as these requirements were met, the rest of the head design could be purely aesthetic. In Tinkercad, I designed a head that resembled my initial sketches in a more stylized and 3D-printable manner.

For the arms, as previously stated, I needed larger servos. I upgraded to the MGS-960, a cheap alternative with metal gears and higher torque, allowing the animatronic better articulation than before. To structurally adhere these servos, I used braces specifically designed to hold servos of this size. I designed each arm so that one servo would serve as a shoulder and control the shoulder’s “yaw”. Attached to this was a secondary shoulder servo, which controlled only the shoulder’s “pitch” and was attached to braces forming the animatronic’s upper arm. At the other end of the upper arm was the third servo, which functioned as an elbow and connected the upper arm to the forearm and hand. I used the braces to construct the arm and, after tediously attaching the servos, managed to make this configuration work with the previous arm control design.

I purchased a computer power supply to power the six servos. This supply could deliver 12 amps at 5 volts, which was ideal: each servo can draw more than 1 amp at 5 volts under load, so the six servos together need roughly 6 amps or more, comfortably within the supply’s budget. Running at their nominal voltage and current, the servos turned and held position far better than the nine-gram versions had.

The next iteration of the facial mapping code changed very little. I removed the mouth PNGs in favor of five mouth “vents”. These vents would grow longer and glow orange as the operator opened their mouth, then shrink and turn grey when the operator was not speaking.
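
Drawn in Processing, the vents amounted to something like the snippet below. Again, this is a simplified reconstruction: the value range is an assumption, and the mouth-openness value is faked with the mouse so the snippet runs on its own (in the real version it came from Face OSC).

    // Simplified sketch of the "vent" mouth.
    float mouthHeight = 0;
    int numVents = 5;

    void setup() {
      size(800, 600);
      noStroke();
    }

    void draw() {
      background(0);
      mouthHeight = map(mouseY, height, 0, 1, 7);  // stand-in for the Face OSC value
      float openness = constrain(map(mouthHeight, 1, 7, 0, 1), 0, 1);
      for (int i = 0; i < numVents; i++) {
        float x = 250 + i * 60;
        float ventLength = lerp(10, 150, openness);                  // vents grow longer
        fill(lerpColor(color(120), color(255, 120, 0), openness));   // grey -> orange glow
        rect(x, 400 - ventLength, 30, ventLength);
      }
    }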

Now, the most notable aspect of this second iteration was the new code used for controlling the arms. Initially, my plan was to use the Microsoft Kinect: with the Kinect and Processing working in tandem, I should theoretically have been able to use skeletal tracking and translate the arm and joint values into servo-friendly values for the Arduino. Unfortunately, this failed. I cannot overstate the pain I went through trying to get it working, but to summarize: while the Kinect and Processing can work together on macOS, the skeletal tracking library is not included there; to get skeletal tracking, one must use a PC. After figuring this out, I bought a 64-bit PC running a 64-bit version of Windows 10. That computer failed to work, as did the next two PCs (with identical specs). Taking this as an omen that skeletal tracking was not meant for this project, I “cheated” and built a different control scheme.

Using light tracking in Processing, I was able to obtain the x-y coordinates of any color I chose to track. This meant that if I held a flashlight, I could track its x-y position and record those values. I elected to use two flashlights, one in each hand, so that I could get x-y coordinates for both hands. Then I applied some inverse kinematics, generating shoulder and elbow values for each arm from the tracked hand positions. This required tweaking, but once done, I managed to successfully generate values for the servos.
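
Stripped of the tracking and serial code, the kinematics step looked roughly like this. The link lengths and the +90 offset on the shoulder are hand-tuned assumptions rather than measured values, and the function only solves a simplified two-link arm in the camera plane, but it shows how a single tracked hand position becomes a pair of servo-friendly angles.

    // Two-link (upper arm + forearm) inverse kinematics in the camera plane.
    float upperArmLen = 120;
    float foreArmLen  = 100;

    float[] solveArm(float handX, float handY, float shoulderX, float shoulderY) {
      float dx = handX - shoulderX;
      float dy = handY - shoulderY;
      // keep the target within the arm's reach so the acos() calls stay valid
      float reach = constrain(dist(0, 0, dx, dy),
                              abs(upperArmLen - foreArmLen) + 1,
                              upperArmLen + foreArmLen - 1);

      // law of cosines for the elbow, then the shoulder angle toward the target
      float elbow = acos((sq(upperArmLen) + sq(foreArmLen) - sq(reach))
                         / (2 * upperArmLen * foreArmLen));
      float shoulder = atan2(dy, dx)
                     - acos((sq(upperArmLen) + sq(reach) - sq(foreArmLen))
                            / (2 * upperArmLen * reach));

      // convert to the 0-180 range the servos expect (+90 is a hand-tuned offset)
      return new float[] { constrain(degrees(shoulder) + 90, 0, 180),
                           constrain(degrees(elbow), 0, 180) };
    }

    void setup() {
      float[] angles = solveArm(180, 60, 0, 0);  // made-up tracked point
      println("shoulder: " + angles[0] + "  elbow: " + angles[1]);
    }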

I laser-cut a base to hold the animatronic and attached the animatronic’s frame to it. In addition, I took some scrap material and fashioned it into aesthetic components to embellish the animatronic’s body. After spray-painting the base and body parts to match, I assembled the physical body and was ready to bring everything together.

Finally, I hooked everything up: the servos to power and the Arduino; the Arduino to the computer tracking the body via Processing; and the projector to the computer running the Processing sketch reading my face. After some more adjustments in the space, I filmed myself with all components of the second iteration working with a sufficient degree of success.

This second iteration gave me much to think about. While I was happy with the design, there are still many aspects to fine-tune. I plan on completing one more iteration, which will be an improvement on all the others.

 

Assignment 29/4 – Rewant Prakash

Activity Analysis:

I was ambitious at first and tried to order food on my phone using VoiceOver (for iPhone) with the food delivery app Ele.me. To make sure I wasn’t cheating, I used minimum brightness and made sure I couldn’t really see what I was doing on the phone. While playing around with this functionality earlier, I had learned several shortcuts (such as the two-finger swipe down for reading all the apps on the phone left to right, and the two-finger tap for the item chooser) that were very helpful for finding the app, but it still took me a while to locate and open it successfully. I would often accidentally open the wrong app or forget different gesture functions. When I finally opened the app, I was unable to understand most of it, not because everything was in Chinese, but because after everything it read out loud it would append “button”, which was quite disorienting. More importantly, the labels for images were just “image” and didn’t really say what the content was. When I was trying to place an order, I got stuck at the popup menu where you select portion sizes and add to cart. In a nutshell, my takeaway from this exercise was that not everything on your phone is accessible, and that app makers need to make their user interfaces more accessible.

Everyday technology user chart:

[screenshot: everyday technology user chart]

 

Cerecares field trip:

I found the trip to Cerecares incredibly educational in many ways, as most of my knowledge about users of assistive technology was purely theoretical. Although I had previously read about cerebral palsy, its symptoms, causes, and management, and had even met an extended family member with this disorder, this was the first time I visited a facility that cares for kids with it. It was interesting to see how much effort and how many resources go into not only helping these kids adapt to everyday functions of life but also maintaining the facility itself.

The founder of this facility had previously tried treatments from all around the globe, but ultimately found that the traditional Chinese methods of acupuncture and massage therapy were the most effective. She, along with her sister, established this institute, where that treatment is used to help kids with cerebral palsy alongside other exercises and practices. The furniture inside the facility was varied and custom-designed for the kids; for example, the chairs had a partition to keep space between a kid’s legs, helping proper development as well as preventing them from falling down.

When we asked whether the government helps the institute in any way, we learned that it is not very supportive. I wasn’t surprised, as I believe that Southeast Asian countries in general are far behind in providing assistance to people with disabilities through their policies, grants, and infrastructure. They have a lot of catching up to do compared with many countries in the West, both in upgrading infrastructure and in providing monetary assistance. Furthermore, our society needs to get educated and change its mindset when it comes to the treatment of people with disabilities.

 

Lab 11 (Moon): Drawing Machine

This Friday I worked with Kathy from my session, and Jacob Park and Andrew Huang from another session, to build a drawing machine.

Kathy and I first assembled a mechanical arm driven by a stepper motor.


Then we built the circuit with an SN754410NE IC chip, a 5-volt power supply, an Arduino Uno, and a potentiometer, according to the instructions given.

[wiring diagram: bipolar stepper motor with H-bridge]


With the circuit wired up, we uploaded an example sketch from the pre-installed Arduino Stepper library to control our stepper motor. The only difficulty we encountered was controlling the speed of the motor, but as we played with it for some more time we became more familiar with the machine.

 

We then worked with Jacob and Andrew to combine our parts into the drawing machine. Once both halves were put together, we inserted the pen and let it draw. Here is how the machine and the drawing turned out:

[photos: the assembled drawing machine and its drawing]

Aren’t we just so proud of our (drawing machine’s) talent in art!

Response to Computers, Pencils, Brushes (Chen)

  • Response to Computers, Pencils, Brushes
  • By: Amber (Yutong) Lin

I agree with the author that the computer has indeed changed the way humans perceive. I was particularly struck by the following part: “We need to know in what ways it is altering our conception of learning, and how, in conjunction with television, it undermines the old idea of school…New technologies alter the structure of our interests: the things we think about.”

The author sees problems in design, especially in design education, questioning whether over-use of and dependence on computers to create “artificial,” “digital,” somehow “well-calculated” or “well-coded” work has changed the nature of design. Coding and programming on a screen feel colder and more emotionless than designing by hand. More importantly, the way audiences appreciate and enjoy design work has been transformed. Design has become faster and faster; in a sense the computer makes everything about design easier, but also shorter-lived in this fast-changing age. “Ephemeral” is the word to describe the change.

For example, the homogenization of commercial design has become a visible phenomenon. In the digital age, I am personally more concerned about how human emotion is stirred up by contact with and exposure to design. Under the shadow of marketing-based design, designers are becoming specialists who can detect the smell of profit and use techniques to attract their potential audiences; the more specific the target, the more identical the designs become. It is truly sad, in a sense, that reproduction and commodification are influencing the nature of design.

I am not suggesting that this is caused entirely by the use of computers; the computer indeed opens more space and possibilities for design, and designers are equipped with a powerful tool to realize the ideas in their minds. I have always believed that design exists to promote interaction and emotion between humans through authentic aesthetics. The tool is not that important after all, no matter whether it is a pencil or a computer, as long as people love and appreciate the result.

 

Week 9: Response to “Hackers and Painters” (Chen) Tim Wu

If it weren’t for this article, I would never have made a connection between hackers and painters. That is probably because one popular image of hackers is of unsociable programmers sitting in front of laptops in dimly-lit rooms, trying to break into the networks of organizations or even entire towns. If I were to picture hackers as painters, the canvas they use would probably be a computer screen with greenish, blinking digits of 0 and 1. However, after reading this article and combining the author’s thoughts with my own experience of coding, I started to see how creative the role of the hacker needs to be and how it differs from traditional computer science jobs.

One thing that stood out to me is the author’s emphasis on empathy. He claims empathy is what distinguishes good hackers from less good ones. After all, most products, be they software or computer languages, are going to be used by people with less background in computer science; the ability to design products enjoyed by all becomes the core. Good hackers are not hacking just for fun. They see a problem that a lot of people face, solve it in an untraditional way, and pack the solution in a user-friendly form. A painter is similar: a painter paints not only to express himself or herself but to represent a common human experience as well.

Another idea in the passage also lit up a bulb in my mind: which comes first, original work or good work? Hackers and scientists answer differently. Though I knew there must be something that makes hackers so unique and yet lets them contribute so much to technological advances, I had never thought of the difference in terms of workflow and initial goals. Hackers seek to make something new first and improve by learning from frequent mistakes, while scientists learn systematically first so they can avoid mistakes and aim to develop something original. Both strategies work, and I believe each has its own best place to be implemented. For basic science, I assume it is better to stick with the traditional scientific workflow, because the cost of getting the basic assumptions wrong is too high for research. But for hacking on a product or service, it is probably better to iterate fast and adjust according to feedback, because the work is targeted at an audience.

When discussing why big companies often fail to encourage hackers to contribute more to software development, the author makes a very good observation: companies can hardly “pick out” the best hackers, and thus take the less risky approach of having hackers “implement” ideas rather than design the software.

 

 

Lab 11 – Drawing Machines

April 28th, 2017

Instructor: Moon

Partner: Marina Pascual

Supplies: 1 stepper motor, 1 SN754410NE IC chip, 1 5-volt power supply, 1 power plug, 1 potentiometer from your kit, 1 Arduino and USB cable from your kit, laser-cut mechanisms, pens that fit the laser-cut mechanisms

Aim/Goal

  • Create a circuit using an H-bridge that will control a stepper motor. This stepper motor will be combined with another group’s stepper motor to create a drawing machine using parts made from a laser-cutter.

Steps

  1. Create the circuit as seen in the following diagram:

    Our Circuit

  2. Open up Arduino IDE and use the ‘stepper_oneRevolution’ example as your code. Then test it on the circuit you’ve made.
  3. Assemble the laser-cut mechanisms that will form one-half of the drawing machine.

    Laser-cut mechanisms

  4. Combine your parts with another group in order to create mechanical arms that can hold a marker/pen. This will function as a ‘drawing machine.’
  5. Place a pen/marker at the end of the mechanical arms and tape down a piece of paper for it to draw on.

    Drawing Machine

  6. See what it draws!

    Drawing Machine Masterpiece

  7. Experiment with a potentiometer and the ‘stepper_speedControl’ code to control the speed and direction of the drawing machine.

Conclusion/Lessons Learned

Overall, I thought this was a really fun lab that allowed me to better understand how a stepper motor works. I was aware that using the H-bridge improperly could cause overheating in the circuit, which is why I asked a student helper to check the wiring. Unfortunately, a faulty Arduino caused my computer to short-circuit. This slowed down our progress significantly, as we were unable to add a potentiometer to control the speed and direction. We did finish making the machine, though, and managed to get a modern-art-esque drawing from it. The pen would often come loose, which could be a little frustrating, but eventually we got it to stay; I would recommend taping down the pen in addition to the paper. I would definitely like to experiment with the potentiometer and the ‘stepper_speedControl’ code in the future, because the machine was a little hard to control at times.

Week 10: Response to “Hackers and Painters” (Szetela)

In the world of interactive media and arts, there is no clear boundary between hackers and painters. It is not just because of the application of digital technology in artistic creation in IMA, but also because of the similarities between the two, as stated by Paul Graham in his “Hackers and Painters”.

The first thing they have in common is that they are both “makers”, which means they emphasize “what to do” and “how to do it” rather than pure theory. Indeed, JavaScript is a very powerful language, but what we learn at Comm Lab is not how to use JavaScript for object-oriented programming; it is how to use it to create attractive and interactive websites and projects.

Another similarity is that they both learn by doing. There are plenty of demos in class, but in most cases we learn by looking up W3Schools tutorials when we face technical issues during projects. I have to admit that deadlines are the best motivation for work, and work is the best motivation for study. Besides, hackers and painters both learn through examples; we, too, referred to sample websites made by professors when we practiced.

In addition, both create through gradual refinement. That reminds me of our video project, during which I edited and polished my videos countless times and eventually turned two arrays of raw clips into two videos with zoom-in effects, smooth transitions, and background music.

In his essay, Graham argues that hackers (and painters) should take work cycles into account and guard against the over-excitement and stalling that occur after accomplishing easy tasks. That was especially true during my audio project, when I felt under-motivated after finishing the positioning of the major elements.

From my perspective, Graham actually made many good and useful suggestions for us interactive artists. Although it was written a decade ago, his essay is still inspiring and worth reading.

CL-Week 11: Response to “Computers, Pencils, and Brushes” (Vasudevan)

The development of computers has gradually changed our impression of design. Today, when we talk about design, most of us imagine 3D modeling, digital drawing, typesetting, and so on. As these technologies are more frequently applied and exposed to the public, they seem to have become the representatives of design.

This article encourages us to reconsider the association between design and computers. It points out directly that a computer does not equal design: the former is a tool, a device, a machine, while the latter is a curriculum, an art, a mode of thinking and practice. I agree with the author’s idea that computers can never replace design. Computers can be put into the same category of tools as pencils and brushes (see the title of the article), though they are much more advanced and powerful. Pencils can draw rough drafts in black and white; brushes can bring color to pictures and create more detail; computers can handle digital material and produce unique visual effects such as blurs and filters. If we compare the three tools in this way, it becomes easier to understand that the computer is really a technology, while design is general and abstract, requiring both basic hand-design skills and particular modes of thinking. The most valuable part is the ideas in our minds; next, we must know when and how to use different tools. Good use of different tools maximizes their unique features, and thus makes our ideas come true.