Recitation 10 – Annabel Smit (Young)

Exercise: Controlling Media

In this recitation, work individually to create a Processing sketch that controls media (images or video) by manipulating that media’s attributes using a physical controller made with Arduino. Reflect on this week’s classes to guide you in your coding process, and think about how you can incorporate interactivity and computation into this week’s exercise.

For this exercise I used an image of the famous Dutch painting “Girl with a Pearl Earring” by Johannes Vermeer (picture source: Wikipedia). I recreated the image in Processing by sampling its pixel colors and drawing ellipses that together shape the complete painting. I connected two potentiometers to the Arduino (I ended up using just one) to alter the size of the circles in Processing.

This was the result:



Code on Arduino:

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);
  Serial.print(sensor1);
  Serial.print(","); // comma between sensor values
  Serial.println(sensor2); // linefeed after the last value
}

Code on Processing:

import processing.serial.*;

String myString = null;
Serial myPort;

int NUM_OF_VALUES = 2;
int[] sensorValues;

PImage img;

void setup() {
  size(330, 355);
  img = loadImage("parel.jpg");
  noStroke();
  setupSerial();
}

void draw() {
  updateSerial();
  for (int i=0; i<100; i++) {
    //int size = int( random(1, 20) );
    int size = int(map(sensorValues[0], 0, 1023, 1, 20));
    int x = int( random(img.width) );
    int y = int( random(img.height) );
    color c = img.get(x, y);
    fill(c);
    ellipse(x, y, size, size);
  }
}

void mousePressed() {
}

void setupSerial() {
  myPort = new Serial(this, Serial.list()[10], 9600); // change the index to match your port
  myPort.clear();
  myString = myPort.readStringUntil(10); // 10 = '\n' (linefeed) in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(10); // 10 = '\n' (linefeed) in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

Use this week’s reading, Computer Vision for Artists and Designers, to inspire you to write a reflection about the ways technology was used in your project.

“Computer vision algorithms are increasingly used in interactive and other computer-based artworks to track people’s activities. Techniques exist which can create real-time reports about people’s identities, locations, gestural movements, facial expressions, gait characteristics, gaze directions, and other characteristics.”

“As computers and video hardware become more available, and software-authoring tools continue to improve, we can expect to see the use of computer vision techniques increasingly incorporated into media-art education, and into the creation of games, artworks and many other applications”

In my Processing exercise I combined traditional art and modern software techniques to make a (slightly) interactive form of media art. I used potentiometers to do so, which is not as interactive as using sensors to detect hand gestures, for instance, but it does allow the user to interact, and thus communicate, with the media art in Processing.

Recitation 9 – Annabel Smit (Young)

*I uploaded this blogpost yesterday night but it apparently failed to upload

For Recitation 9: Serial Communication we had two exercises.

Exercise 1: Make a Processing Etch A Sketch

For this exercise, use Arduino to send two analog values to Processing via serial communication. To do this, build a circuit with two potentiometers and write an Arduino sketch that reads their values and sends them serially. Then write a Processing sketch that draws an ellipse and reads those two analog values from Arduino. This sketch should modify the ellipse’s x and y values based on the input from Arduino.

It was quite difficult at first because I started coding on a blank sketch in Processing and Arduino. After that, I tried to edit an example code we had in class from a quite similar exercise, but this only made me more confused. Then, after downloading the example files, it was much easier, because all that needed to be done was to edit certain values and make sure that all the values were correctly connected to sensors 1 and 2.

Code for Arduino:
// For sending multiple values from Arduino to Processing

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);
  int sensor2 = analogRead(A1);

  Serial.print(sensor1);
  Serial.print(","); // put comma between sensor values
  Serial.println(sensor2); // add linefeed after sending the last sensor value

  // too fast communication might cause some latency in Processing
  // this delay resolves the issue.
  delay(100);
}
I accidentally deleted the file with the code for Processing, but this was the end result:


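Since the Processing file itself was deleted, here is a hedged sketch of the logic it most likely contained — parsing the comma-separated serial line from Arduino and mapping the two 0–1023 potentiometer readings to the ellipse’s x/y position — written as plain Java so it runs without serial hardware. The names (`EtchASketchLogic`, `parseLine`, `mapRange`, `toScreen`) and the 500×500 canvas size are my own assumptions, not the original code.

```java
// Hedged reconstruction of the lost Etch-a-Sketch logic (plain Java, no hardware).
public class EtchASketchLogic {

    // Equivalent of Processing's map(): scale v from [inMin, inMax] to [outMin, outMax].
    static float mapRange(float v, float inMin, float inMax,
                          float outMin, float outMax) {
        return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    // Parse one serial line such as "512,1023" into the two sensor values.
    static int[] parseLine(String line) {
        String[] parts = line.trim().split(",");
        int[] values = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            values[i] = Integer.parseInt(parts[i].trim());
        }
        return values;
    }

    // Map raw potentiometer readings (0-1023) onto a w-by-h canvas.
    static float[] toScreen(int[] sensors, int w, int h) {
        return new float[] {
            mapRange(sensors[0], 0, 1023, 0, w),
            mapRange(sensors[1], 0, 1023, 0, h)
        };
    }
}
```

In the actual sketch, the resulting x/y pair would feed an `ellipse()` call inside `draw()` each frame.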
Exercise 2: Make a musical instrument with Arduino 

Write a Processing sketch that sends values to your Arduino based on your mouse’s x and y positions and/or keyboard interactions. Then, make a circuit with your Arduino and a buzzer. The corresponding Arduino code should read the serial values from Processing and translate them into frequency and duration for a tone, which will be sounded by the buzzer.
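Although I never finished my own version, the mapping this exercise asks for can be sketched as plain Java, so it runs without a buzzer: mouseX becomes a tone frequency and mouseY a duration. The specific ranges (100–2000 Hz, 10–500 ms) and the method names are my own illustrative choices, not part of the assignment.

```java
// Sketch of the mouse-to-tone mapping (assumed ranges, no hardware needed).
public class ToneMapping {

    // Linear interpolation, equivalent to Processing's map().
    static float mapRange(float v, float inMin, float inMax,
                          float outMin, float outMax) {
        return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    // Map mouseX across the window width to a frequency in Hz.
    static int frequencyFromMouseX(int mouseX, int width) {
        return Math.round(mapRange(mouseX, 0, width, 100, 2000));
    }

    // Map mouseY across the window height to a tone duration in ms.
    static int durationFromMouseY(int mouseY, int height) {
        return Math.round(mapRange(mouseY, 0, height, 10, 500));
    }
}
```

On the Arduino side, the two received numbers would then be passed to `tone(pin, frequency, duration)`.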

Because the first assignment took me a bit longer, and because I asked a teaching assistant quite a few questions about the coding, I didn’t have enough time to finish exercise 2. I was able to connect the buzzer and start the code, but I unfortunately wasn’t able to complete it. I am planning to spend more time on the weekends experimenting with code for our final project, and perhaps to work more with a partner during the recitations.


The first exercise was not as interactive because it was quite a small exercise. The interaction it did have was the communication between me changing the values through the potentiometers, the Arduino, and the output of the moving circle in Processing.

The second exercise was definitely much more interactive because there is a much larger and more varied communication going on, changing the frequency and duration of tones, and it has a much more entertaining function for the user as well.


Final Project Proposal – Annabel Smit (Young)

A. Project title
“Let’s change the game”

B. Project Statement of Purpose
We seek to challenge people, especially children, to play and communicate with the art of technology. Instead of just handing children an iPad to play online games with, we want to combine the traditional, physical way of playing with the new ways of interacting with technology. We find it incredibly important to make that connection between technology, art and entertainment.

C. Project Plan
Our project aims to entertain the younger public in particular. We want to create a game that combines moving objects and animation in Processing with the interactive, physical part of the game on Arduino. Our main idea is inspired by various raffle games, where a contestant wins a prize if he or she obtains the lucky number, or in this case: the lucky shape. We want the players to throw a small ball through one of four or five different tubes; at the bottom of each tube there is a button with a different purpose. These buttons are connected to the Arduino board, which is connected to Processing. With each different button, a different shape will fall into a box in Processing, and the first shape to fall in wins. All contestants will have a shape assigned to them by a given card and will throw their balls into randomly chosen tubes at the same time. The winner will thus win by pure luck. We will start by composing code for Processing that makes it possible for the shapes to bounce within the borders of the frame and then come to rest in the box. Then we will move on to composing the corresponding code on Arduino, and of course connecting the two. Throughout the process we will most likely gain more inspiration to expand the game by adding more interactions, if possible.
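The “bounce within the borders of the frame” step of the plan can be sketched as simple reflection logic, written here in plain Java so it can be tested outside Processing. The class and field names are my own; the real sketch would draw the shape each frame inside `draw()`.

```java
// Minimal sketch of bounce-within-frame physics (assumed names and units).
public class BouncingShape {
    float x, y;      // position of the shape's center
    float dx, dy;    // velocity per frame
    final float r;   // radius of the shape
    final int w, h;  // frame bounds

    BouncingShape(float x, float y, float dx, float dy, float r, int w, int h) {
        this.x = x; this.y = y; this.dx = dx; this.dy = dy;
        this.r = r; this.w = w; this.h = h;
    }

    // Advance one frame: move, then reflect off any wall that was crossed.
    void update() {
        x += dx;
        y += dy;
        if (x - r < 0) { x = r;     dx = -dx; }
        if (x + r > w) { x = w - r; dx = -dx; }
        if (y - r < 0) { y = r;     dy = -dy; }
        if (y + r > h) { y = h - r; dy = -dy; }
    }
}
```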


D. Context and Significance
I have analyzed certain projects related to game-play. Games such as Bingo and lottery games in particular seemed very interesting for us to realize with code on both Arduino and Processing. By adding ‘real-life’, tangible objects such as the balls, the tubes and the buttons, the game enlarges its most important function: interaction. Interaction, for me personally, stands for various self-explanatory exchanges between two devices, between humans, or between human and device. This interaction should either function as something helpful and useful, such as clinical interaction devices, or it should have a purpose to entertain, as in a game-play device. Besides this, interaction opens and enlarges the conversation between humans and devices; it’s not just important in the future, it’s already significant here, today. This communication, in the form of art or game-play for instance, is definitely something that will be experimented with more.
Our project is based on a game that has existed for quite a long time and it’s exactly because of this fact, that we believe that it could be very interesting to convert this traditional game into something much more modern and digital.

Recitation 8 – Annabel Smit (Young)

1. Go back to your definition of interaction as you defined it during the group project. How has your work in this course involved that definition?

During, and before, the group project, the word interaction on its own, for me personally, stood for one, two or more people exchanging a variety of reactions with each other. However, in the context of an electronic or digital device, I found that the word ‘interaction’ gains much more meaning. Because of the incredible developments in technology, we are able to create reactions not just between humans alone, but between humans and devices as well. The interaction now indicates a loop of reactions between a human as the input and the computer as the output. By collecting and measuring our biological senses, the computer can process them and react to them, followed by us humans reacting to the output, and so on.

However, there is a very important distinction to make between a reaction and an interaction. A reaction is solely a single response from a computer to us humans, and vice versa, whereas an interaction indicates a series of reactions between humans and the computer. When we speak of an interaction, the computer is able to collect certain data by using either sensors or switches, or a combination of both; it will then process this data and compute it into an action, followed by another reaction, and so on.

My work in Interaction Lab has involved this definition during the research project, for example, for which my group and I ‘created’ an interactive machine that would function as digital therapy. Here, the interaction took place between the device and the (human) user. Communication would be required between the two: the biological inputs of the human body, the processing by the digital computer, and the outputs of a variety of calming/relaxing effects. And this wouldn’t just happen a single time; it would repeat itself, if necessary, throughout the day. Besides this, we have been experimenting with many different interactive circuits during the recitations once we began to add sensors. And now, with Processing, we are able to create interactive animations, for instance with the keyPressed and mouse functions, as well as the ArrayList class. We can, as it were, create interactive animation games digitally.

2. Based on the initial definition and your experience developing and executing your midterm project, how has your definition of interaction evolved? Be as specific as possible.

My definition of interaction has remained quite the same through my midterm project. The aspect that changed, however, is that the interaction should almost entirely be discovered by the users themselves by interacting with the device. Therefore, it is important to make the device itself as self-explanatory as possible, through its design, its purpose, and the addition of certain suggestive elements, for instance. Besides that, it’s important that by experimenting with the device, something new should happen; a supplementary element should appear. The user has to stay engaged, interested and entertained by the creation you have made, if it’s not for clinical use, of course.

3. Research two other interactive projects, one that aligns with your definition of interaction and one that differs from it.

The interactive project that aligns with my definition of interaction is, for instance, the Weather Thingy – Real time climate sound controller by Adrien Kaeser. The Weather Thingy is a sound controller that uses real-time climate-related events, such as the wind, to control and modify the settings of musical instruments. The device has three climate sensors and processes the data it receives. The user can constrain the value received or amplify it at any time during the process. I personally really admire this project, because the creator used several elements to form it as a whole: he created a device using technology, then he used the climate to gather and process data, and lastly, he brought in humans as users to alter and use the data to create live music. It is extremely technological and futuristic, but at the same time it’s environmental and conventional. It aligns with my definition because I believe it’s very important to use a variety of original elements, such as climate, as a data source to create an interaction, and to then create communication between human, technology and environment (nature). Besides that, so many different reactions take place here: with every different current of the wind the data changes, and thus the music changes, and the user can alter it to his or her own preferences (music taste).

A project that clashes with my definition of interaction, however, is the Face Trade – Art vending machine project by Matthias Dörfelt, which trades mugshots for “free” portraits.

For this project, Matthias Dörfelt created an art vending machine where buyers trade their mugshots, instead of money, for generated computer drawings. These mugshots will be stored in the Ethereum blockchain forever. As I have stated in an earlier blogpost, I find that what Dörfelt wanted to create is extremely clever and creative; however, I believe that storing people’s faces in a public blockchain goes a step too far in realizing his idea. “He wanted to capture the wide spectrum of things associated with blockchain which fluctuates between silk road, greed, gold rush mentality and the utopian promises of decentralization and democratization that we only got to see small glimpses of thus far” (Filip Visnjic). To me personally, exchanging a mugshot for an art piece, with that mugshot then stored online forever, is comparable to a hacker watching you through the webcam of your laptop; the only difference is that the person gave their consent. However, the people trading their mugshots do not know what could happen to them, and to their identity as a whole, after their pictures have been stored in the Ethereum blockchain. Thus, even though these people trade their mugshots voluntarily, they only volunteer to the extent of what they know. In this project it is quite difficult to deduce the artist’s intentions in collecting personal information. On the surface this project seems like a very creative and entertaining initiative, perhaps truly meant for research, but we will never know if that was the artist’s honest intention. This project therefore clashes with my definition, and more importantly, with my definition of the purpose of creating interactive projects. Interactive projects, if not medical (clinical), should be created as a genuine addition to art or entertainment. They should not be misused for other purposes, such as collecting the private data of individuals. That is not an interaction; it is an invasion of privacy.

4. Write a “new” definition of interaction in your own words. Draw from your account above and try to evolve your definition beyond the current convention.

For me, interaction still mostly holds the same meaning as I had developed before the research and midterm projects. Interaction is where either two or more people, people and devices, or multiple devices exchange a variety of reactions with each other. The interaction indicates a loop of reactions between a human as the input and the computer as the output, a loop that can vary and change over time, during the different processes and data collections. By collecting and measuring our biological senses, computers can process and react to them, followed by us humans reacting to the output, and vice versa. However, there is still an important distinction to make between reactions and interactions. A reaction is solely a single response from a computer to us humans, and vice versa, whereas an interaction indicates a series of reactions between humans and computers. Besides this, what I did discover is that the interaction should almost entirely be discovered by the users themselves by interacting with the device. It is thus important to make the device as self-explanatory as possible, and the design is the key element for this. Besides that, it’s important that something new and engaging should occur each time during the interaction, by making users use their bodies to interact with the device. This way, we enlarge the communication between humans and ‘robots’ significantly, which is incredibly important for our future.





Recitation 7 – Annabel Smit (Young)

During this recitation we had to code a moving animation. I chose to work with the Array and Bounce examples. With the help of the Processing website and other reference sites I was able to build the following animation:

CLICK — bouncing-balls

ArrayList<ball> balls = new ArrayList<ball>();
float gravity = 0.1;
float resistance = 1;
int mX = 300, mY = 300;

void setup() {
  size(600, 600);
}

void draw() {
  background(255);
  for (int i = 0; i < balls.size(); i++) {
    ball b = balls.get(i);
    b.update();
  }
  text("Ball goes from green (left click) to red (right click)", 15, 15);
  line(mX, mY, mouseX, mouseY);
  fill(color(0, 255, 0));
  ellipse(mX, mY, 5, 5);
  fill(color(255, 0, 0));
  ellipse(mouseX, mouseY, 5, 5);
}

void mousePressed() {
  ball b;
  if (mouseButton == LEFT) {
    mX = mouseX; // move the green launch point (reconstructed)
    mY = mouseY;
  } else if (mouseButton == RIGHT) {
    // Add new ball, launched toward the mouse
    b = new ball(mX, mY, 10, 0.1*(mouseX-mX), 0.1*(mouseY-mY));
    balls.add(b);
  }
}

class ball {
  float x, y;
  int r;
  float dx, dy;

  ball(int _x, int _y, int _r, float _dx, float _dy) {
    x = _x;
    y = _y;
    r = _r;
    dx = _dx;
    dy = _dy;
  }

  void update() {
    dy += gravity;    // gravity pulls the ball down each frame
    dx *= resistance; // air resistance (1 = none)
    x += dx;
    y += dy;
    if ((y+r) > height) { dy = -dy; y = y - 1; }
    if ((y-r) < 0) dy = -dy;
    if ((x+r) > width) dx = -dx;
    if ((x-r) < 0) dx = -dx;
    ellipse(x, y, 2*r, 2*r);
  }
}

Then for the homework I made the following code:

int posX = 250;
int posY = 250;
int hue = 0;
int rad = 150;
boolean growing = true; // renamed from "Ellipse" to avoid confusion with ellipse()

void setup() {
  size(600, 600);
  colorMode(HSB, 100);
}

void draw() {
  background(#FFFFFF);
  ellipseMode(CENTER);
  strokeWeight(25);
  stroke(hue, 100, 100);
  hue++; // cycle through the hues
  if (hue > 100) {
    hue = 0;
  }
  ellipse(posX, posY, rad, rad);
  if (growing) {
    rad++;
    if (rad == 200) {
      growing = false;
    }
  } else {
    rad--;
    if (rad == 100) {
      growing = true;
    }
  }
  if (keyPressed && keyCode == LEFT) {
    posX = posX - 1;
  }
  if (keyPressed && keyCode == RIGHT) {
    posX = posX + 1;
  }
}

CLICK — moving-circle-video

It can be quite difficult to find the right functions and keywords to make certain actions happen, and it’s quite time-consuming as well. Nevertheless, watching the end result is really fun, and there is, of course, still a lot to learn.

Midterm Project – Annabel Smit (Young)

The Haunted House

At the beginning of our midterm project, my partner and I brainstormed many possible ideas for products that could actually help people in their daily lives. Most of these ideas were quite complicated and complex, and they didn’t seem technologically possible to realize in such a short timeframe. One day, while we were meeting for our project in the café on the second floor and talking through all these possible ideas, my partner noticed the Halloween decorations in the cafeteria and suggested something totally different than we had thought of before: a haunted house. With Halloween coming up, we both agreed that this could be a very fun project to experiment with. We knew that the target public would be quite broad, and more importantly, it would be something that our fellow classmates, people of our own age, would find entertaining. Many theme parks have had haunted houses almost from the very beginning; it’s often one of the oldest attractions built, and nevertheless people of any age still evidently enjoy the experience of purposely getting scared, shocked, and entertained. It is also one of the most prominent desires of human beings to gain adrenaline by experimenting with fear and the unexpected: for instance, by doing intense sports, climbing high mountains, freefalling, skydiving, cliff jumping, riding rollercoasters, watching frightening movies and, in our case, visiting the ‘supernatural’ in a haunted house. Our haunted house, however, is slightly different: we want to stimulate people to use their whole bodies to interact and engage with an electronic device. The purpose is to increase the communication and interaction between human and non-human. There are numerous reactions taking place between the user and the variety of functions of the device. The reactions of the device are unexpected instead of obvious, and the users need to use their whole bodies to trigger them.

One of the readings was incredibly interesting regarding the design of an interactive project: Making Interactive Art: Set the Stage, Then Shut Up and Listen, in which Tigoe says: “The thing you build, whether it’s a device or a whole environment, is just the beginning of a conversation with the people who experience your work. What you’re making is an instrument or an environment (or both) in which or with which you want your audience to take action. Ideally, they will understand what you’re expressing through that experience.” The reading basically argues that a project should be designed in such a way that the user alone is able to figure out how it functions by interacting with it; there shouldn’t be anything left for the creator to say. We applied this to our project by trying to make the haunted house self-explanatory. It definitely helped that almost all of us are familiar with the concept. However, we added a few elements that would give the user more direction. For instance, we added the slogan “I dare you to come a little closer” and images of skeleton hands near the ultrasonic rangers, to encourage users to decrease their distance from the haunted house and to use their hands in front of the sensors to discover what awaits them. The user testing session definitely helped us enhance these suggestions to make them less obvious yet still present.

Haunted attractions were first built in the United States during the Great Depression, and Disneyland’s famous Haunted Mansion opened in 1969. There had been ghost houses before, but never an actual ‘haunted house.’ In a way, Mr. Walt Disney is one of the inspiring artists of our project. Before his famous theme park, Disney developed film animations because he wanted to create something new and modern for the public; he wanted to entertain them with his own creations. Something we want to do as well.

One other inspiring project was the PomPom Mirror created by Daniel Rozin, whose design uses the movement of the human body as the signal for the mirror to produce its reactions, following the movement of the body. This artwork is a collaboration between humans and devices, which is important for the future, since this collaboration is developing incredibly fast. Besides this, the use of the human body is an important element here as well.

Another interesting reading for us was Introduction to Physical Computing by Igoe and O’Sullivan, in which they talk about the senses as seen by the computer; even though their portrait of the human senses as seen by the computer is incomplete, missing a mouth, it’s very significant. Our device focuses mainly on the movement of the person increasing and decreasing his or her distance, in order to activate the movement of the mask, turn on the lights, and play the tones. We in turn use our vision and hearing to observe these reactions in order to respond to them further.

There were also some videos of interactive designs that were both entertaining and medically useful, such as the video shown in class of the team that created an alternative to physical graffiti by developing an eye-controlled, digital form, used by a graffiti artist who became paralyzed and could only move his eyes. This design actually makes an impact on the life of the user, which is something that would be beautiful to achieve in our midterm project; however, something like this, at this stage, is very difficult to realize.

We thus created our haunted house instead, by adding several elements. First, we thought of adding a reaction with red/yellow LED lights to engage the user and to create a spooky Halloween atmosphere. We connected each of the ultrasonic rangers to a separate breadboard to enable different light reactions for different movements by the user. We added the second element by uploading a code that plays the tones of O’lantern by Beethoven, a spooky song. Before our user-testing session we didn’t have our third reaction yet: the moving skeleton mask. By adding a stepper motor beneath the mask, we were able to make it move slightly when one comes closer. In an actual haunted house, machines are timed and programmed to make certain movements; they don’t engage with the public in order to activate their different functions and reactions. There is no communication between the device and its user. In our mini haunted house, however, there is, maybe ever so slightly, but there definitely is. The user is able to engage with it and respond to it, as it were, to communicate with it, which is something we really observed in the user-testing session.

The immediate benefits of our project would of course be entertainment and an adrenaline boost. However, the long-term benefits, value and effects would be the stimulation of interaction and communication between humans and devices, not just for the purpose of enhancing people’s lives and fixing world problems, but also on a smaller scale in the form of entertainment. It could be a great addition to current attractions to build something more interactive, in order to decrease the distinction made between humans and robots. We expected that people would become curious, excited, and slightly scared by our project, especially if placed in a different setting, in a dark hallway for example. Our haunted house is meant for people of all ages: for everyone who likes the rush of adrenaline that comes from fear, and for people who are curious about and intrigued by interacting and communicating with technology in various ways.

We used quite simple material for the outer part of our haunted house: a cardboard box. However, we searched for fall and Halloween elements to creatively put together something ‘sophisticated’ yet scary. We 3D printed the smaller pumpkin and added a bigger one. We would have liked to make a box with the laser-cutting machine instead, if we had had enough time. Our user-testing session, as said before, influenced the design of our suggestive elements, such as the hands and the slogan, but also the addition of the third reaction, the moving mask. People were entertained, but not yet frightened, which is why we chose to add a third element. We, of course, would have liked to have time to add more. Besides this, ours is a very small version of the haunted houses in theme parks, where everything is human-sized and extremely well designed, with a bigger budget and more time, of course.

Our goal was thus to create a haunted house with a variety of reactions between the user and our project: on the one hand, to entertain and frighten the user, and on the other hand, to stimulate the interaction between humans and robots (technological devices) with the use of our whole bodies. In my opinion, on a smaller scale, we succeeded in doing so. We entertained our users, perhaps frightened them (if in a different setting) and, most importantly, we realized and increased the communication between humans and devices. It would be even more so with more time, by adding more functions and making more use of the fabrication opportunities. We learned to successfully design a 3D model after failing the first time, and we succeeded in coding multiple reactions into our project. This midterm project has taught us a lot and given us more definition of what interaction truly means, and why this is so important now and will continue to be in the future. Beyond all other fields, the art and entertainment industry can really demonstrate the future’s more interactive collaboration between technology and humans.

Recitation 6: Processing Basics – Annabel Smit (Young)

For this recitation I chose an image with the famous motif of Piet Mondriaan, a Dutch painter of the 19th/20th century. My family and I, born and raised in The Netherlands, have always admired his simple yet beautiful artwork. By using very basic shapes and colors he created something incredibly unique, yet simple and minimalistic. Even though Mondriaan created his designs almost a century ago, their shapes and colors were very modern for their time, and still are today. However, today we would most likely design this pattern digitally, in Illustrator or Processing 3.4, for instance, instead of by hand with a paintbrush.

My creation on Processing 3.4


void setup () {
  size (660, 600);
  background (255);
  noLoop(); // static composition; draw() only needs to run once
}

void draw () {
  // black grid lines
  fill (0);
  rect(200, 60, 10, 650);
  rect(320, 60, 10, 650);
  rect(10, 60, 600, 10);
  rect(10, 400, 600, 10);
  rect(320, 310, 600, 10);
  rect(610, 60, 10, 650);
  rect (250, 400, 10, 650);
  rect(200, 150, 600, 10);
  rect(200, 310, 600, 10);
  rect (520, 60, 10, 90);

  // colored planes
  color c = color(255, 204, 0); // Define color 'c' (yellow)
  fill(c); // Use color variable 'c' as fill color
  rect(11, 59, 188, 340); // Draw rectangle
  color d = color(0); // Define color 'd' (black)
  fill(d); // Use color variable 'd' as fill color
  rect(320, 310, 290, 90); // Draw rectangle
  color e = color(200, 0, 0); // Define color 'e' (red)
  fill(e); // Use color variable 'e' as fill color
  rect(330, 406, 280, 200); // Draw rectangle
  color f = color(50, 55, 100); // Define color 'f' (blue)
  fill(f); // Use color variable 'f' as fill color
  rect(10, 410, 240, 200); // Draw rectangle
  color g = color(50, 55, 100); // Define color 'g' (blue)
  fill(g); // Use color variable 'g' as fill color
  rect(530, 60, 140, 90); // Draw rectangle
}

As of right now, I think that for me personally it’s easier to use Photoshop and Illustrator instead of Processing 3.4. Perhaps if I had a lot of experience with code and were able to memorize all the shapes, colors, and forms by code, it would actually be useful. Nevertheless, right now, I prefer working with Photoshop and Illustrator, at least for non-animation artwork.

Recitation week 5 – Annabel Smit (Young)

For this week’s recitation, we had to imagine that we would be presenting our research project device to an investor. We had to design a poster that advertises our interactive device and communicates it well.

The device for our research project is called the Therapy Box. As explained in my previous blog post, it is an interactive device that helps people find an escape from their rushed daily life whenever they feel the absolute need to do so.

For my design I used both Photoshop and Illustrator. I searched online for a portrait of a frustrated, stressed-looking person. I began editing in Photoshop, where I first applied the Stained Glass filter to create a color-coordinated texture effect. After applying the filter, I connected the different color patterns together to form a more abstract image, shaped solely by colors rather than by lines. I purposely left part of the design in the Stained Glass texture, because its straight, logical lines represent technology, as opposed to freer fields of color. I then brought the completed file into Illustrator, where I added a picture of the New York skyline, which represents the rushed and busy lifestyle the consumer is presumably caught in right now. I kept the colors muted to represent the negative and unhappy thoughts of someone in an unpleasant state of mind, which is exactly what our device offers an escape from.


Picture sources: photo-1508186225823-0963cf9ab0de.jpeg


Recitation 4 – Drawing Machine (Young) – Annabel Smit

Materials needed:

For Steps 1 and 2

▪ 1 * 42STH33-0404AC stepper motor
▪ 1 * SN754410NE ic chip
▪ 1 * power jack
▪ 1 * 12 VDC power supply
▪ 1 * Arduino kit and its contents

For Step 3

▪ 2 * Laser-cut short arms
▪ 2 * Laser-cut long arms
▪ 1 * Laser-cut motor holder
▪ 2 * 3D printed motor couplings
▪ 5 * Paper fasteners
▪ 1 * Pen that fits the laser-cut mechanisms
▪ Paper

Step 1: Build the circuit

The circuit for this week’s recitation was a bit more complicated than the ones we built in the previous weeks, at least if we count the number of wires needed to connect everything correctly. However, by following the circuit diagram, I succeeded in connecting the large number of wires to the breadboard and the Arduino correctly. After uploading the code to the Arduino, the motor rotated smoothly.

Step 2: Control rotation with a potentiometer 

For step two we had to add a potentiometer to the circuit to allow for analog input. After that, I uploaded the MotorKnob code onto the board and changed the number of steps in the code to 200. Then I added the map() function, using the example code from the Arduino reference and pasting it inside void loop(), just before the closing bracket (}). At first I mistakenly put the map() call at the top of void loop(); this didn’t work, and Arduino would show that an error had been made. After adding map() correctly, the motor stayed still as long as the potentiometer was left untouched. By twisting the potentiometer left and right, the motor would start to rotate as before, turning either one way or the other.

Step 3: Build a Drawing Machine!

Together with one of my classmates I was luckily able to move on to step 3 quite quickly. After collecting the materials needed for building the drawing machine, we put our motors together to construct the collaborative machine. I wouldn’t exactly call the outcome a masterpiece, but we had a lot of fun ‘drawing’ it. By twisting and turning the potentiometers, and shifting the paper beneath the marker pen, we collected a number of abstract lines and shapes, and rewarded ourselves by taping our ‘masterpiece’ to the IMA studio window.


Question 1:

I would be interested in building a machine that is connected to my thoughts when I stand in front of my closet in the morning. The machine would be connected to my brain and to the hangers holding the different pieces of clothing in my closet. Together, my thoughts and the machine would let me find specific clothing and combine outfits a lot faster. I would, for instance, picture a certain skirt with a certain blouse, and the machine would act on this thought by collecting the right items together so that I could see beforehand how they would look as one outfit. This would save me a lot of time in the morning, for sure…

Besides this, and speaking more of the digital manipulation of art, I would love to build another machine connected to my thoughts. This time, I would connect my thinking process to a paintbrush that could translate thought into actual motion on a canvas, in order to enable even people who have difficulty moving, or holding their hand still, to create something.

Question 2:

Andy Gracie created an interesting project called Fish, Plant, Rack, in which a robot is directed by its evolving interpretation of the electrical discharges of a virtually blind fish in a separate aquarium, and interacts with plants that it observes on video. The robot must have sensors and a motor that can convert the incoming signals into motion. I’m not quite sure how this machine works exactly, at least not the part that involves the interaction with a ‘virtually blind elephant fish.’ My guess is that the fish releases electrical signals as it moves, signals that are then transmitted to the robot, which turns them into movement of its own: moving in between the plants to give them nutrition. We haven’t worked with a motor quite like this yet, but the motor used for last recitation’s drawing machine came quite close. The difference, however, is that in this machine the robot only moves along one straight line, which again could of course be programmed.