IxLab: Final Project!

Partner: Jessica Chon

Project Title: Divine Intervention

Inspiration & Ideation

This project, in its earliest form, was a continuation of our midterm, the Kill Me Not Flower. The user feedback we received during the midterm motivated us to keep developing that project. One of the main pieces of feedback we focused on was that many users tended to engage with only one aspect of our project (either the light or the water) because the components were spread out too much and not well integrated. Another lesson we kept in mind was how we ran into problems the night before the in-class presentation because of my servo motor.

 

Concept Development

Moving forward from the midterm, Jessica and I talked a lot about possible ideas for the final. We both agreed that we wanted to incorporate some new elements of Processing we had learned, namely sound, to add more interactivity to our project. At first, we toyed with the idea of having a physical butterfly containing a microphone, so that if the user blew on it, the butterfly would actually fly away. Talking to one of the fellows, we realized that butterflies are actually very pretty, and if the user saw one, they probably wouldn't think to blow it away from the flower. So, we decided to make it a fly instead. Jessica did some research and found out that the whitefly is a common pest of indoor plants. Further considering the potential applications of the microphone as an input, Jessica came up with the idea to have flies appear on the sky/window display and gradually grow in number until being blown away by the user.

As for my own development of the sky/window display, I focused on two specific pieces of feedback from the midterm. One was that there should be some kind of base state that is always present. The second was that the sky display should change not just when the user shines a flashlight on the flower, but also when the user covers the flower up. To be clear, this also happened in the midterm, but it was done with lerping and simply changed the brightness of the screen.

 

Coding & Building

Processing

When I first started working on the Processing sketch for the sky display, I thought I should make the clouds look more realistic, so I started swapping in different cloud PNGs and seeing how they looked. However, during the concept presentation in class, people said that they had liked the animated, cartoon-ish look of the original sketch. Because of this, I went back to the original cloud image that I used for the midterm. As for the sun, I decided to switch to a different sun image that I found online. In the original midterm code, the sun would spin as a form of feedback so the user knew that shining the light on the plant was having some effect. However, some people gave feedback that this spinning didn't really make sense in terms of the effect that light would have on a plant. As such, I decided to eliminate the spinning animation.

One piece of feedback I got from Rudi during a recitation was that the various components of the project were still too disjointed. He pointed out that when a plant gets too much light, it's a bad thing, and that I should reflect that somehow on the screen. He also said that it would be better if something happened on the sky screen when the plant was watered. As a result, I moved away from the clouds I had previously been focusing on and found images on Google of a bright sunny sky, a nighttime sky, and a desert. The idea was to set light thresholds so that when the light level was around the room's normal ambient light, the sky would be bright and sunny. If the user shines too much light on the plant's built-in LDR sensor, the sky turns into a desert. To integrate the water, I decided that once the plant was watered, it would cause rain on the screen. I got help with the rain animation code from this YouTube video: https://www.youtube.com/watch?v=Yg3HWVqskTQ&t=693s. I looked at several videos and code websites, but ultimately this video was the most helpful to me.

Once I integrated Jessica's code for the flies into the sky, I needed to figure out a way to add a delay so that time could pass before the flies would start to reappear again. Our worry was that if the flies immediately reappeared, the user would just keep blowing them away to get rid of them and never explore other parts of the interaction. I had the idea to do this with a boolean and by counting frames from the moment the microphone input volume exceeded the set threshold. Luis worked with me to figure out how to set the conditions so this would produce the delay I hoped for.
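
A stripped-down version of that frame-counting pattern looks something like this (the variable names here are illustrative; the full sketch at the end of this post uses flygone, gonetime, and timer, with a threshold of 30 and a 200-frame delay):

// Minimal sketch of the frame-counting delay pattern.
// 'volume' stands in for amp.analyze()*100 from the microphone input.
float volume = 0;
boolean blownAway = false;
int returnFrame;

void draw() {
  if (volume >= 30 && !blownAway) {
    blownAway = true;
    returnFrame = frameCount + 200;  // remember when flies may start coming back
  }
  if (blownAway && frameCount > returnFrame) {
    blownAway = false;               // enough frames have passed; flies can reappear now
  }
}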

Arduino

Coding for the LDR sensor was easy because I already had experience working with the LDR for my midterm project. As for the water sensor, Jessica helped me with the code for that since she had worked with a water sensor for the midterm. I had to adjust the code a little bit in terms of naming variables and sending the water value to my Processing sketch with serial comm.

At first, I thought I was going to work with an ultrasonic range sensor for my project, so I spent some time figuring out how the coding for it worked. I was pretty confused at first, because I based my code off of the example on the Arduino website and didn't really understand how each line of code was functioning. The purpose of using the distance sensor was to know whether or not there was a user standing in front of the project, so that light changes alone wouldn't set off the animation I had originally planned to create. However, during class when I asked Antonius a question about my coding, he pointed out that I was overcomplicating the process quite a bit. As it turned out, I could accomplish what I wanted without using the distance sensor at all! This made my life a lot easier.

Serial Communication

The serial communication was pretty straightforward for this project. I needed some help getting the Arduino code to send multiple values to my Processing sketch. Nicholas showed me that I was writing println() where I should have just been writing print(), which affected how Processing was receiving the values. Once I fixed this, my serial communication worked fine.
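
In other words, each reading goes out as one comma-separated line, with print() for the values and a single println() only at the very end of the line. A stripped-down version of the sending loop (this is essentially what appears in the full Arduino code at the end of this post):

void setup() {
  Serial.begin(9600);
}

void loop() {
  int val1 = analogRead(A0) / 4;  // light sensor
  int val2 = analogRead(A1) / 4;  // moisture sensor
  Serial.print(val1);             // print(), not println(), keeps both values on one line
  Serial.print(",");
  Serial.print(val2);
  Serial.println();               // one newline marks the end of a complete reading
  delay(100);
}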

Building & Setup

For the flower petals, we kept the flower from our midterm project. I bought the flower pot off of Taobao, and we cut a hole in the bottom of it to run our USB cables through to our computer. The stem of the flower is new for this iteration and is a bubble tea straw covered in polymer clay. The dirt for the flower is made of brown polymer clay. Two boxes were used to create the height difference we needed so that both screens could be visible.


                 

User Testing

We conducted the User Testing on our own for this project, as was required. The first test was a bit of a fail: when I detached and reattached my computer's display, the Arduino stopped communicating with my computer and none of the sky animations were happening. Luckily, our user tester was very friendly and patient. He was confused about which part of the project he should interact with first, as were our second and third user testers. This made us realize that we needed to write instructions for our users. Due to time constraints, we didn't do this before the final presentation. As a result, many of our peers wrote on the comments sheet that they thought our project definitely needed instructions.

In addition to this, one of our user testers remarked that even though he knew to blow on the microphone, it probably wouldn’t be intuitive to other users to blow toward the microphone instead of directly at the screen where the flies were appearing. For the IMA show, we made a small sign pointing toward where the user should blow. In the future, it would be great to have this sensor on the flower so that the user’s most instinctual behavior resulted in the flies going away.

The last feedback that I want to note is that users did not recognize our moisture sensors as roots, and they remarked that they wished the water could go directly into the "dirt". If we were working with a plant that would have much longer intervals between uses, I think we could have tried using some real dirt, or a more realistic simulation of dirt, that the user could pour water onto. In that situation, the dirt (real or fake) would have ample time to dry, letting the sensor reading return to zero. However, since this was a version that needed to be tested over and over again (and used over and over within a short time span at the IMA show), it didn't make sense to put the moisture sensors somewhere we couldn't easily access to dry them off. For the record, Jessica used her knowledge of dyes to try to dye our sponges brown so that when we wrapped them around the moisture sensors, they would look more like roots. We ended up with an orange color that we simply did not have time to fix because of other things that needed to be tended to.

 

IMA Show

Setting up for the IMA show 🙂

Even though it was after the presentation of our project, the IMA show was the place where we got the most user feedback. It was nice to see the looks of amazement on some little kids' faces when they blew the flies away or made the sky turn to nighttime. One challenge we did not anticipate was the decent number of people coming to the show who only spoke Chinese. I was able to communicate with them, and through trial and error I figured out how to best articulate the purpose of our project in Chinese. I feel like this experience really tested my ability to clearly and concisely explain what our project was about, since not only was I talking to a total stranger, but I was forced to communicate in a simpler way because I do not know any vocabulary about circuits or interactive technology in Chinese. Also, blowing on the flies got a little bit tricky during the show. Because the input value for this comes from the microphone in my computer and the room got quite noisy, I think that's why sometimes it would randomly start raining. This could have been fixed by taking some time to adjust the threshold for the volume value that makes the flies disappear. However, I was reluctant to make changes to my code in the middle of the show since it was working for the most part.

 

Final Thoughts

After having the chance to present at the IMA show, I am still left with many ways I would like to improve our project if we were to make more iterations. I would definitely try to integrate the microphone into the flower so that the user could blow toward the flower to get the flies to go away. An interesting thing I observed during the IMA show was that even though we had instructions, not many users read them. So, I would figure out a more obvious place to put the instructions, or I would build them into the Processing sketch so that they were embedded within what the user was interacting with. Also, I would like to create a Chinese version of the instructions so that even more people could enjoy the interaction.

Overall, the looks of surprise and happiness on the users' faces at the IMA show made all the effort from the semester feel even more worth it, and I am left with an extra component to my personal definition of interaction. Now, more than before, it's clear to me how important the user's feeling is in the moment they first interact with something. Of course this varies depending on the interaction: with something like a computer keyboard, we (or at least I) don't have any consistent feelings on a conscious level about the feedback we get from the computer. That is, when I press "A" and the "A" shows up on the screen, I am not totally amazed and excited and awestruck; if the "A" didn't show up, though, I would get frustrated and angry. However, when moving beyond keyboard and mouse for interacting with digital media, it is more likely the user will be surprised when finding out about some new behavior they can use to get a response from the digital media, even if it is a simple response, such as the screen going dark or some flies disappearing. In the end, I am excited to see what kinds of human behaviors could eventually become as second-nature for interacting with media as a button press is now. Thanks very much to Antonius and all of the fellows (especially Nicholas and Luis, whom I bothered most frequently) for guiding me through the learning process this semester and helping me gain this kind of appreciation for interactive media.

 

//Arduino


int sensor1 = 0; //connect light sensor to A0
int sensor2 = 1; //connect moisture sensor to A1
int val1;
int val2;
int moisturePower = 7; //digital pin that can power the moisture sensor

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);

  pinMode(moisturePower, OUTPUT); //OUTPUT so the pin can actually drive the sensor's power line
  digitalWrite(moisturePower, LOW);
}

void loop() {
 // put your main code here, to run repeatedly:
 val1 = analogRead(sensor1) / 4;
 val2 = analogRead(sensor2) / 4;
 Serial.print(val1);
 Serial.print(",");
 Serial.print(val2);
 Serial.println();
 delay(100);
}
//
//int readMoisture() {
// digitalWrite(moisturePower, HIGH);
// delay(10);
// val2 = analogRead(sensor2);
// digitalWrite(moisturePower, LOW);
// return val2;
//}

 


			
import processing.serial.*;

ArrayList<Fly> manyFlies;

import processing.sound.*;

AudioIn input;
Amplitude amp;

PImage photo, sunny, night, desert;

Serial myPort; 
String myString = null;
//int val1;
PImage Sunimg;
PImage Cloudimg;

int tooBright, tooDark;

Drops d[];

boolean flygone = false;
int gonetime;

int SUNSIZE = 679;
boolean growing = false;
int growtime;

boolean using = false;
int usetime;

int NUM_OF_VALUES = 2;
int[] sensorValues;

boolean cloudparting = false;
int parttime;
int timer = 0;

float c;
float g;

void setup() {
  size(displayWidth, displayHeight);
   setupSerial();
  d=new Drops[6000];
  for (int k=0; k<6000; k++) {
    d[k] = new Drops();
  }
   sensorValues[1] = 1;
  c = 0;
  //loading fly image
  photo = new PImage();
  photo = loadImage("fly.png");
  photo.resize(0, 100);
  sunny = new PImage();
  sunny = loadImage("sunny.jpg");
  night = new PImage();
  night = loadImage("night.jpg");
  desert = new PImage();
  desert = loadImage("desert.jpg");


  manyFlies = new ArrayList();
  manyFlies.add(new Fly());
  manyFlies.add(new Fly());
  manyFlies.add(new Fly());
  manyFlies.add(new Fly());
  manyFlies.add(new Fly());
  manyFlies.add(new Fly()); 

  //audio setup
  input = new AudioIn(this, 0);
  input.start();

  amp = new Amplitude(this);
  amp.input(input);
}

void draw() { 
  updateSerial();
  printArray(sensorValues);
  //make images smaller than 3000 x 2000

  if (sensorValues[0] <= 200 && sensorValues[0] > 70) {
    image(sunny, 0, 0);
  } else if (sensorValues[0] > 200) {
    image(desert, 0, 0);
  } else if (sensorValues[0] <= 70 && sensorValues[0] > 0) {
    image(night, 0, 0);
  }

  if (sensorValues [1] >= 180) { 
    //INSERT JESSICA CODE HERE
    fill(220, 220);
    rect(0, 0, displayWidth, displayHeight);
    for (int i=0; i<6000; i++) {
      d[i].display(); 
      if (d[i].ydrop>height) {
        d[i] = new Drops();
      }
    }
  }

  windowPane();

  float volume = amp.analyze()*100;
  //println(volume);
  //loops through every fly of manyFlies
  for (int i = 0; i < manyFlies.size(); i++) {

    //gets one of the many flies and draw it and move it which is called from other tab "Fly"
    Fly oneOfManyFlies = manyFlies.get(i);
    oneOfManyFlies.showFly();
    oneOfManyFlies.moveFly();
  }
  //print(frameCount);
  //print(":");
  //println(gonetime);
  //depending on random, lets get more flies
  if (gonetime < frameCount && timer == 1) {
    println("here");
    //if (random(1)<0.2) { //&&  frameCount > gonetime+6000
    manyFlies.add(new Fly());

    //}
  }

  //if you blow, flies will go away
  if (volume >= 30) {
    //loop previous loop backwards
    if (!flygone) {
      flygone = true;
      gonetime = frameCount +200;
      timer=1;
    }
    for (int i = manyFlies.size()-1; i >= 0; i--) {
      //remove flies
      manyFlies.remove(i);
      flygone = false;
      volume = 0;
    }
  }


  //println(volume);
}


void setupSerial() {
  //printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);

  myPort.clear();
  // Throw out the first reading,
  // in case we started reading in the middle of a string from the sender.
  myString = myPort.readStringUntil( 10 );  // 10 = '\n'  Linefeed in ASCII
  myString = null;

  sensorValues = new int[NUM_OF_VALUES];
}

void updateSerial() {
  while (myPort.available() > 0) {
    myString = myPort.readStringUntil( 10 ); // 10 = '\n'  Linefeed in ASCII
    if (myString != null) {
      String[] serialInArray = split(trim(myString), ",");
      if (serialInArray.length == NUM_OF_VALUES) {
        for (int i=0; i<serialInArray.length; i++) {
          sensorValues[i] = int(serialInArray[i]);
        }
      }
    }
  }
}

//rain 
class Drops {

  float xdrop, ydrop, speed;
  color q;
  Drops() {
    xdrop = random(width);
    ydrop = random(-1000, 0);
    speed = random(5, 10);
    q = color(255,255,255);
  }
  
  void update(){
   ydrop += speed; 
  }
  
  void display() {
   fill(q);
   noStroke();
   rect(xdrop, ydrop, 2, 15);
   update();
  }
}

//window pane
void windowPane() {
  stroke(0);
  fill(0);
  rect((width/2)-30, 0, 60, displayHeight); //vertical pane
  stroke(0);
  fill(0);
  rect(0, (height/2)-30, displayWidth, 60); //horizontal pane
  fill(0);
  rect(0, 0, 90, displayHeight); //left-most pane
  fill(0);
  //fill(255, 255, 255);
  noStroke();
  rect(displayWidth-90, 0, 90, displayHeight); //right-most pane
  rect(0, 0, displayWidth, 70); //top pane
  fill(0);
  noStroke();
  rect(0, displayHeight-70, displayWidth, 70); //bottom pane
}

//flies


class Fly {
  //initial variables for fly
  int x;
  int y;
  float r;


  //constructor makes my fly
  Fly() {
    x = width/2;
    y = height/2;
    
    //used to make fly rotate random positions
    r = random(0, 2*PI);
  }


  //showing fly
  void showFly() {
    //image(photo, x, y);
    
    pushMatrix();
    translate(x, y); 
    rotate(r);
    image(photo, 0, 0);
    popMatrix();
  }

  void moveFly() {
    //move fly
    x = x + floor(random(-40, 40));
    y = y + floor(random(-40, 40));
  }
}

IxLab Recitation 11: Media Controller

Partner: Jessica Chon

*We got instructions on how to use the vibration sensor (including circuitry and coding) from this Arduino tutorial.*

The song in the video is Alien by BOBBY, a super awesome Korean-American rapper from the group iKON.
Materials:
  • Breadboard
  • Arduino and USB cable
  • Vibration sensor
  • Jumper cables
  • 1 Megohm resistor

We decided to work with an audio file in Processing and a piezo vibration sensor. Our idea was to use the vibration sensor to detect whether someone was singing or not (from the vibration on their throat) and send that value to Processing. Our goal was for Processing to receive the piezo value and use it to control the volume of the audio. I thought this kind of interactive media might be suitable for people who get shy at KTV and don't sing very loudly into the microphone, even when the song they put in the queue (that nobody else there knows) comes on, thus killing the great vibe that was just established after everyone jammed to Waka Waka by Shakira. Effectively, quiet singing would result in the track volume getting so low that you could be easily heard, forcing the shy KTV singer to sing their heart out in order to raise the volume of the track.
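
We never got this part working (more on that below), but the core of what we were aiming for on the Processing side would have looked roughly like this sketch. This is an assumption-filled outline, not our final code: it assumes the Arduino sends the piezo reading as a single byte with Serial.write(), and "song.mp3" is a hypothetical file sitting in the sketch's data folder.

import processing.serial.*;
import processing.sound.*;

Serial myPort;
SoundFile song;
int piezo;

void setup() {
  size(500, 500);
  myPort = new Serial(this, Serial.list()[0], 9600);
  song = new SoundFile(this, "song.mp3");  // hypothetical track in the data folder
  song.loop();
}

void draw() {
  while (myPort.available() > 0) {
    piezo = myPort.read();  // one byte per reading from the Arduino
  }
  // quiet singing -> quiet track; loud singing -> full volume
  float volume = map(constrain(piezo, 0, 255), 0, 255, 0.0, 1.0);
  song.amp(volume);
}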

At first, we found that the piezo values were finicky, so we got some help from Nicholas, who informed us that we were connecting our circuit incorrectly. Once we resolved this issue, the serial monitor on Arduino displayed values more in line with what we were expecting.

Wrong circuit. Sad.

It was difficult choosing a threshold suitable for the experience we wanted to create (requiring the user to sing at a decent volume in order for the audio file's volume to rise). The more we worked with the vibration sensor, the more we realized how sensitive it was, and that it's better suited for what the tutorial code describes: a knock. Singing loudly near the sensor or putting it on our throats was not producing a change in sensor values that was consistent enough for Processing to give a similar result every time.
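
For comparison, the knock-style use that the tutorial has in mind really only needs a single above-threshold check per hit, something like this condensed sketch (the threshold value and pin here are placeholders for whatever your wiring calls for):

const int knockSensor = A0;  // piezo across the 1 Megohm resistor
const int threshold = 100;   // only a sharp knock spikes above this

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(knockSensor);
  if (reading >= threshold) {
    Serial.println("Knock!");  // a discrete event, not a continuous singing level
    delay(100);                // crude debounce so one knock isn't counted many times
  }
}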

Proper circuit. Yay.

Given how late in the recitation we realized this, we decided to stick with this sensor. To add a visual component, we generated an ellipse whose size depended on the value being sent from the piezo sensor. At this point, we eliminated the audio file element from Processing because we needed to use an external speaker. The implication of this is that the last iteration of what we were working on (and the one shown in the video) does not include a media controller like what was outlined in the recitation instructions. This is due to our lack of time and to realizing that the sensor we had chosen was not going to get the job done for what we wanted to accomplish.

If we continued iterating, I think we would first look into getting a sensor more appropriate for what we wanted to accomplish. I think the biggest takeaway I got from this recitation was realizing how many different interactive media experiences could be created with just Arduino-compatible sensors and a Processing sketch. As a psychology major, I am really interested in people's expression of emotions and the way facial expressions of emotion vary across cultures. In my Cultures of Psychology class, we have talked about how the meaning of a smile can depend so heavily on different aspects of culture. I think it would be great to make an interactive display that could detect someone's facial expression and project them onto a screen, with an environment around them that changed based on their facial expression and a previously chosen culture (via a button or something), to allow people to immersively experience the wide spectrum of meanings a facial expression can contain based on cultural context. For example, a smile might indicate friendliness in one culture, which would lead to a display that evoked this idea of friendliness, whereas that same smile in another culture might indicate extreme nervousness or discomfort, and so the display would change to evoke a very nervous or uncomfortable feeling.

 

CODE:

Processing:
import processing.sound.*;
import processing.serial.*;

Serial myPort;
int piezo;

//SoundFile file;

void setup() {
  size(500, 500);


  printArray(Serial.list());
  piezo = 1;
  myPort = new Serial(this, "COM7", 9600);

}


void draw() {
  while (myPort.available() > 0) {
    piezo = myPort.read();   // read the latest byte sent from the Arduino
  }
  background(255);

  if (piezo > 20) {
    fill(piezo, 0, 255);                          // color shifts with the piezo value
    ellipse(width/2, height/2, piezo, piezo*2);   // ellipse grows with the piezo value
  }
}
ARDUINO
 
int piezo;
int sensor = 0;     // piezo sensor connected to A0

void setup() {
  Serial.begin(9600);       // use the serial port
}

void loop() {
  // read the sensor, scale it down to a byte, and send it to Processing
  piezo = analogRead(sensor) / 4;
  Serial.write(piezo);

  delay(100);  // delay to avoid overloading the serial port buffer
}

IxLab Recitation 9: 3D printing :D

Partner: Jessica Chon

During this recitation, Jessica and I worked with TinkerCad to create a model for 3D printing. We were particularly interested in 3D printing because we want to use it for our final project to create enclosures that make things like servo motors less conspicuous. A little servo motor is lovely, and you can say it has its own aesthetic, but that aesthetic does not fit with the one we are hoping to achieve with our new and improved flower. Additionally, because our project involves a moisture sensor onto which we intend for our users to pour water, perhaps we can use 3D printing to create some enclosure for the wires below our plant so that the water can be safely diverted away from the wires and Arduino boards, thus avoiding any safety issues. Safety first!

TinkerCad is interesting to use, but also sometimes frustrating. I will note that while Jessica and I both worked on a design for printing, in the end, we used Jessica’s design. Still, I will show screenshots of my original design here.

The object we wanted to create was a cube with holes inside of it, structured so that you could see through the holes. The holes were to be created by adding individual spheres to the interior of the shape that varied in size. This was Jessica’s idea, and I thought it sounded neat and aesthetic, and also was an opportunity to practice doing some more tedious tinkering in TinkerCad. One thing that made working on this design difficult was that when you zoom out of TinkerCad a certain amount, you stop seeing the interior of the shape. In hindsight, I realize that maybe there is a feature of TinkerCad that could help circumvent this problem. If I work with TinkerCad in the future, I will be sure to ask someone for help with that. I had to leave school right after recitation, but Jessica continued tinkering and ended up deciding to use the honeycomb grid shape generator to create a uniform cut through the entire cube. She made the appointment for the 3D printing (thanks fam, you da best) and we went this past Monday and commenced the 3D printing with Nicholas’ help. Yay.

Except, it was not all yay. 3D printing is not as effortless as it is sometimes made out to be in the media when it is shown off to laypeople with no prior experience with the technology. The reading "The Digital Fabrication Revolution" opened my eyes to many things about 3D printing, of which the most surprising to me was that additive manufacturing has been around since the 1980s. From this reading, I got to understand what a great disconnect there is between the media's portrayal of 3D printing and the actual tedious process of 3D printing. I got to ask Nicholas a bunch of questions that were on my mind about 3D printing, like what's the deal with printing organs. I did not realize that in fact a structure is printed, into which cellulose material is injected. Previously, I was under the (very erroneous) assumption that the printer was printing a full, completed version of the organ. /*end of organ-printing aside*/

Nicholas helping us send the file to the printer

 

First try

The first iteration of our print was going smoothly until the filament got caught and thus the printer stopped emitting anything. This was sad, but part of the process, and perhaps a good thing ultimately because it gave us a chance to scale down the size of our cube so that it would print faster.

 

Tangled filament does not a successful print make

Second try

The second iteration of our print seemed promising and finished about halfway before we encountered any issues. We checked on the print and found that the filament was on the verge of tangling if we did not intervene. Nicholas tried to pause the print to untangle the filament and then resume it. In a perfect world complete with every blessing from the 3D printing gods, the printer would have remembered the position it left off at and returned to that position when it resumed printing. But this is not a perfect world, and our printer friend did not go back to its previous position.

Third try

Third time’s a charm! After two disappointing but not surprising failures, we started our third print and prayed to the 3D printing gods that they would mercifully allow for a successful print since we all had to go home and couldn’t check up on the progress of the printing for this iteration. The following morning, Jessica and I went to the fab lab and found that in a display of true mercy, the 3D printing gods allowed the print to finish successfully! Yay. Note: I am aware there is literally zero divine intervention involved in the 3D printing process. Anthropomorphizing the machines is done in good fun and awareness of their vulnerability to error.

 

Finished cube!

Now for the quasi-existential question: If I were to imagine an assignment using digital fabrication at IMA in the year 2149, what would be different and what would be similar?

How would 3D printing in IMA be different a bit over one hundred years from now? I anticipate that by this time, something will be done to speed up 3D printing without severely compromising the quality of the print, as right now the time it takes to complete a print seems to be a limitation of 3D printing. I also imagine that despite advances being made, the technology will continue to have limitations, and be prone to some mistakes thus requiring several iterations before arriving at a satisfactory final product. I also imagine (and hope) that by this time, more eco-friendly printing materials will be more ubiquitous. Lastly, as new technologies will have definitely come out by this time, there will probably be things that can supplement or maybe even replace 3D printing as an additive manufacturing method.

Ix Lab Recitation 10: Serial Communication!

This recitation, I decided to go to the serial communication workshop. In fact, I found out that the content of the workshop was the same as the serial communication workshop I attended earlier in the semester. But still, I thought it would be helpful to reinforce those skills and brush up on things that I did not remember. As I am considering sending multiple values from Arduino to Processing for the final project, this was a very useful workshop for me. One unexpected hiccup I ran into was that my Arduino program was causing trouble and not uploading the code to my board. I received an error message that said “collect2.exe: error: ld returned 1 exit status”. When I talked to Nicholas, he said that this problem sometimes happens with the Microsoft Surface devices. Fear not, I thought, I can just check out a MacBook with Jingyi’s help! So I did and was able to complete the circuit and code for the Processing to Arduino part of the workshop.

 

Unfortunately, my computer issue limited the time I had for the Arduino to Processing exercise, so I want to make some time to complete that in the coming week.

One possible application for sending multiple values from Arduino to Processing is to have the user blow on a microphone to represent them creating "wind", and this wind input could trigger the movement of the clouds on the screen. This would be in addition to the input from the LDR, which is going to control the brightness of the screen. Thanks to this serial communication workshop, I have a good idea of some base code I can use and build off of for my final project, especially the Processing code. Still, I would like to make time to sit with a fellow or professor and go through some parts of the Processing code line by line to make sure I am fully clear on how every part of the code is functioning.

IxLab Final Project Essay

Interaction, especially in the context of my project, refers to a user’s engagement with something external to their own self, and the subsequent response from this external thing with which they are engaging. Interaction involves some action on the part of the user, and feedback from the device/ apparatus they’re interacting with that lets them know the result of what they’ve done. When a button is pressed or a switch is flipped, the user’s experience should change in some way. Otherwise, it seems like a problem has arisen. Flipping a light switch only to find that your room remains dark usually indicates that the light bulb has burnt out. With dimmer switches, a sliding movement usually triggers a change in the dimness of light. The immediate feedback that the user receives in these experiences plays an important role in making sure the user understands the intended purpose of the interaction.

 

Rather than viewing interaction as a binary (i.e. interactive or not interactive), it is useful to think of things on a spectrum, thus classifying things as more or less interactive. Considering interactivity in degrees helps us compare interactivity and allows us to talk about making things more interactive. The interaction between a person and their plant is relevant to critique vis-à-vis our project. As explained in our midterm project documentation, the problem we identified with caring for live plants is their vulnerability, their tendency to die if not constantly tended to, and their lack of immediate feedback. Moreover, sometimes the temperature and lighting of an environment simply cannot be controlled enough (in a feasible way) to create conditions that allow the plant to thrive. Of course I am not trying to knock live plants, because I think they are lovely, and if your life presents the proper conditions for raising one, there's no harm in doing so. However, for those of us who still want the emotional satisfaction of caring for a plant, the interaction between owner and house plant can be emulated and reconstructed using sensors and actuators.

 

For our final project, Jessica and I wish to further develop our midterm project, incorporating some new knowledge of Processing we have gained since then. We have also carefully considered all of the user feedback we got during user testing, and we are making changes to much of the design of the flower installation we created in order to make the interactive experience more intuitive for the user. One piece of feedback we received consistently was that the spread and placement of the screens displaying animations was too wide, which led users to focus on only one part of the installation when in fact we had meant for it all to be unified. Thus, I will take Antonius' suggestion of projecting the sky/environment Processing sketch onto something behind the flower pot. Also, with regard to my animation, I plan on making changes so that the display more realistically simulates nature. I will create a kind of base state for the animation that includes some slow movement of the clouds, as this more accurately represents nature. The point of our project is to provide the user with the satisfaction of caring for a plant, so it makes sense to emulate nature to make the plant care-taking experience even more familiar to the user. In line with this, we also plan to add rain and wind sounds with Processing to further engage the user's senses. Additionally, we are considering adding a sensor that, when blown or whistled into, will trigger wind of some kind in the Processing animation.

Ix Lab Recitation 8: Drawing Machines

My partner: Jessica Chon

Because of my recent experience during midterms with trying to power too much with my laptop USB and temporarily rendering both USB ports broken, I proceeded with much caution while creating this circuit and making sure every pin on the H-Bridge was connected to the right place. Luis suggested that I check out a Mac since Windows machines are known for relatively weaker USB ports. Not wanting to take any chances, I got a Mac and used that for the programming and powering of the Arduino.

For Step 1, I checked all of my wirings a second time before plugging in my USB, so I was pretty confident that I wasn’t going to explode anything. The motor worked when I powered it on and uploaded the code. Yay. Paying close attention in class on Thursday was super helpful to make this process less intimidating.

As per the Step 2 instructions I added the potentiometer. This was scary at first but then quickly not as scary when I realized that a) I did not have to change any of the H-Bridge wiring and b) all of the code was in the Arduino examples.
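
The relevant example is essentially the Stepper library's MotorKnob sketch: read the potentiometer, then step the motor by however much the reading changed. A condensed version of that idea (the steps-per-revolution value and the H-Bridge pins here are placeholders for whatever your motor and wiring use):

#include <Stepper.h>

const int STEPS = 200;                 // steps per revolution (depends on the motor)
Stepper stepper(STEPS, 8, 9, 10, 11);  // H-Bridge inputs (depends on the wiring)
int previous = 0;                      // last potentiometer reading

void setup() {
  stepper.setSpeed(30);  // motor speed in RPM
}

void loop() {
  int val = analogRead(A0);      // potentiometer on A0
  stepper.step(val - previous);  // move by however much the knob turned
  previous = val;
}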

Using actuators, in this instance, two stepper motors, was a rewarding experience. When I first read the recitation instructions earlier in the week, I was so sure I would not be able to finish the assignment successfully within the class time we were given. But Jessica and I were able to finish with time for taking a look at the work of our peers. I wish we had more time to play around with the code and experiment with different speeds for the motors to see what kinds of different drawings we could make with the drawing machine. Because we were not doing more specific programming, I felt like the results we were producing were quite arbitrary. It was quite a happy coincidence that our first drawing actually resembled the painting “Girl with a Pearl Earring” by Johannes Vermeer https://www.britannica.com/topic/Girl-with-a-Pearl-Earring-by-Vermeer

“It looks like a bunch of awkward 8’s” “They look like fishes” Hence the name, 88 Fishes

Sometimes, when you repeatedly turn potentiometers with literally no planning, you just might end up with something resembling a Dutch masterpiece.

I am anxious to incorporate Processing code into drawing machines. My basic idea was to have the mouse position affect the movement of the drawing machine, though I find this quite boring.

The installation that most intrigues me from the reading is Hysterical Machine, 2006 by Bill Vorn (p. 128). I find it quite uncanny that the robots give off the sense that they have their own internality as a community of semi-autonomous, yet human-programmed and ultimately human-controlled, machines. I looked up pneumatic motors, the actuator used in this installation, and found that they work by expanding compressed air (https://en.wikipedia.org/wiki/Pneumatic_motor). It seems that the advantage of these motors is that they are safer since they don't require electric power and thus don't create sparks. Because safety is a prominent concern among people who are wary of the development of autonomous and semi-autonomous machinery and robots, it makes sense that, as a safety measure, the creator and artist here, Bill Vorn, would opt for something relatively less dangerous like the pneumatic motor. Quite obviously, this installation has a significantly higher degree of autonomy than the drawing machine we created. This owes to many things, the most evident being the complexity of the sensors and actuators, and the way the cameras and other sensors on the robots read multiple inputs from the humans interacting with the installation in order to control the movement of the actuators.

IxLab: Midterm!

The Kill Me Not Flower

Partner: Jessica Chon!

Photo creds: Antonius

Story Time- Inspiration to ideation

Jessica came up with the idea and concept of creating a project with a plant that could be cared for with water and light, though with the caveat that this would not be a living plant. In her initial research, Jess found several articles about people caring for fake plants, namely this one: https://www.huffingtonpost.com/entry/late-wife-pranks-husband-plant_us_5a609e8be4b01f3bca58cd9b along with a podcast which speaks to the value of nurturing something that does not necessarily require our nourishment: http://www.distractionpodcast.com/2017/07/20/s2-mini-9-watering-fake-plants-isnt-always-a-waste-of-time/
Thus, we realized how a fake plant could be useful and meaningful for us personally. At the beginning of the fall 2017 semester, we went to Carrefour together to get some cheap shelving for our room and wanted to buy a plant to bring some life to the space. We fell in love with this lovely green fella and named him Scott.

Around the time right before the Mid-Autumn festival, Scott started to look a little sad. Sad in the context of Scott meant drooping leaves and slight discoloration. When we left Shanghai for the October break for 10 days, we had no one who we could ask to take care of Scott. We came back to an even more sad (but not totally dead) Scott. To make a long story short, by the end of November when the weather had become quite cold in Shanghai, Scott was very dead and beyond saving. Our two small cacti purchased in Seoul added a bit of life to the room, still, but Scott’s lack of life outweighed the contributions of Susie and Jared.

Read into our anthropomorphization of our plants however you care to, but the salient takeaway with regards to our project was that we enjoyed caring for Scott and got satisfaction out of seeing him flourish, but realized with our hectic schedules and the unstable temperature of our dorm room, it was too tall of an order to try and keep things alive.

 

From some ~feelz~ about plants to an actual project

This is where our project comes in! Through prototyping, sketching, brainstorming, and getting feedback from fellows, a Xinchejian hacker, and professors, we conceived of the Kill Me Not flower. Jessica had the idea to incorporate animations from Processing as a way to give the user of the Kill Me Not some feedback about their plant caretaking. We divided the work so that I was responsible for the servo motor that rotated the plant, the LDR sensor whose input values controlled the rotation of the plant, and the Processing sketch of a sky background which would change based on the LDR input value. Jessica coded the moisture sensor, a servo motor which would make the plant wilt or stand up straight based on the moisture sensor value, and a Processing sketch that changed based on the moisture sensor value.

The day of the Xinchejian talk was the day Jessica and I began building the circuits for our project and coding to get the physical components working. Originally, I thought I was going to need two light sensors to control the rotation of the flower, so my first circuit included two LDRs and a servo motor. After sketching out several versions of the flower and trying to imagine how the rotation would work based on the input values, we concluded that one sensor would probably be sufficient for controlling the rotation of the flower. One rather helpful insight from a fellow was that the LDR did not have to be on the flower. I don't know why I had previously been so sure that the best place for the LDR was in the middle of the flower. Once I started sketching out the placement of the LDR, keeping in mind that I could place it somewhere separate from the flower, I became much clearer about how the rotation could work. Even if I ended up changing my mind and adding a second LDR to control the position of the servo motor, I was pretty sure about only needing one LDR value to send to Processing.

 

 

Processing- iteration, frustration, and other -ations

Jessica created the non-animated initial state of the Processing sketch, which I used as the basis for my design of the sky and the subsequent animation.

The goal was for all of the colors on the screen to increase in brightness with an increase in the LDR input, so I set the colorMode to HSB. I talked to Luis about colors and how to achieve the brightening I wanted, and serendipitously he was in the middle of some work focused on colors. He gave me this website (https://programmingdesignsystems.com/color/a-short-history-of-color-theory/index.html#a-short-history-of-color-theory-xZzRFOZ ) to read about color theory, which helped a lot in understanding how hue, saturation and brightness affect a color.
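
As a rough illustration of why HSB made this easier: with colorMode(HSB), the third value is brightness itself, so one number can dim or brighten a color without touching its hue. In this little stand-alone sketch, ldrValue is just a hard-coded stand-in for the mapped sensor reading (the real code ended up taking a different route, described below):

int ldrValue = 200;  // stand-in for the LDR reading mapped into 0-255

void setup() {
  size(400, 400);
  colorMode(HSB, 255);  // hue, saturation, brightness all range from 0 to 255
}

void draw() {
  background(150, 120, ldrValue);  // same hue and saturation; brightness follows the LDR
}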

My next task was adapting the sky and sun to fit the size of my display. The sun was originally made up of four quadrilaterals and an ellipse. I knew that in the end I needed the sun to rotate, and while I knew I could use pushMatrix() and popMatrix() to accomplish this, I was worried about keeping these five shapes together. I spoke to Nicholas about this, and he recommended I take a screenshot of the original sun, use Photoshop to crop that sun, and put it into my sketch as a PNG. This made my sun into one single object– yay! A new issue arose, though– I didn't know how to put images into Processing. Luckily, I was able to easily find some help on the Processing website, particularly from these links:

https://processing.org/reference/image_.html

https://processing.org/reference/loadImage_.html

Next I dealt with the clouds. I had recently watched this 10-minute coding challenge on The Coding Train (https://www.youtube.com/watch?v=17WoOqgXsRM&t=201s ) which used Object Oriented Programming (OOP) to make a starfield. I knew that I needed to learn how to use this to most efficiently create and animate multiple clouds. I asked Nicholas to direct me toward resources that could help me learn about OOP, and he sent me to https://processing.org/tutorials/objects/ . After reading this, I was able to create a Cloud class in my program, within which I created separate functions for drawing the cloud, moving it right, and moving it left. Once I accomplished this, Nicholas commented that my clouds did not quite look like clouds. He showed me an example of code he made where he combined a bunch of overlapping ellipses to create a cloud shape in Processing. To save myself some time, I opted to use a cloud PNG from online (https://pngtree.com/element/down?id=NjgwMTE3&type=1). Then, since I needed to make multiple clouds using OOP, I had to figure out how to use an image in a class. This link was my main resource for that: https://forum.processing.org/one/topic/how-to-use-pimage-in-a-class.html

As I mentioned earlier, I wanted the sky and everything on the screen to change brightness based on the value sent to Processing from the LDR. I talked to Tristan about this, and he told me a nice little hack: I could put a dark rectangle over my entire screen and vary its opacity to alter the apparent brightness. Originally, I made the opacity dependent on frameCount, but I needed it to change based on the LDR value. I attended the Serial Communication workshop (thanks Luis and CiCi for your review and teaching about multiple values!) and got a good handle on how to send the Arduino value to my Processing sketch. I ran into an issue after getting them connected, though: the LDR value was constantly fluctuating, so the brightness was changing constantly and created a flickering effect. Cool in its own right, but really not the effect I was going for, so I spoke to Antonius about how the heck to make it not flicker. And thus I was introduced to linear interpolation, aka lerping, which seems to be quite a cool and powerful thing for programming. Antonius had me look up lerping, and I opened this: https://processing.org/reference/lerp_.html, which was of little help to me given my lack of understanding of lerping, but it gave Antonius an idea of how to incorporate lerping in a very simple way to accomplish what I wanted in my code. The role of lerping in my code was to define several ranges of LDR values and, within each range, nudge a float variable called 'c' toward one target value a little each frame, so that the overlay's opacity settled smoothly on that value instead of jumping around.
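
For reference, the basic lerping idea is just to move a value part of the way toward its target each frame. The code below does this by nudging c up or down by one per frame, but a more direct use of lerp() would look like this minimal sketch (target here stands in for the value chosen from the LDR ranges):

float c = 120;     // smoothed opacity of the dark overlay
float target = 0;  // stand-in for the value chosen from the current LDR range

void draw() {
  background(85, 152, 219);   // the sky color from the sketch
  c = lerp(c, target, 0.05);  // move 5% of the remaining distance toward the target
  fill(10, c);                // dark overlay whose opacity is the smoothed value
  rect(0, 0, width, height);
}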

After finally finishing all of the coding, we built the housing for our project and the flower itself. What an exciting (and stressful) time this was. Kudos to Jessica for a lovely window and flower design. I did the flower pot base and cut holes out for the water to go through to the sensor.

 

In the end, this was the video we showed for our presentation in class 🙂

 

Lessons learned

Jessica and I budgeted ourselves a lot of time for this project. I thought that we really had given ourselves an ample amount of time. But I learned that we cannot plan for problems, and we cannot predict when they will happen! Of course, here I am referring to the fact that I tried to power a new servo motor from my computer's USB port and paid attention only to the voltage and not the current, thus rendering both of my computer's USB ports useless.

Racing against the clock in our humble Jinqiao abode

Trying to change to a more powerful Servo, only to go and break my USBs 🙂

Not to mention this little snafu occurred around 2am the morning that our project was to be presented in class. We went to the mac lab to try and make a running version of our project so that we could have a video to present in class. This worked but was ~not aesthetic~ and really a far cry from the final result we hoped for!!!

This was a horrible time, resulted in no sleep, and made me realize that:

  1. Coffee is a precious thing and I credit it for my survival
  2. Arduino can give 5V to my servo, but I'd better power motors and other outputs with an adapter next time so that I don't harm my USB ports (or the components).

User Testing

 

Below are my notes about user feedback we received:

  • Light interaction (moving between levels) is not clear without some explanation.
  • In the first several interactions, users were unsure if the light input was making the plant perk up or making it turn around. This probably has to do with the values from the light sensor being fickle.
  • One of the first users said that the purpose of adding water was clear but adding light was confusing…I think we improved this as we went along by giving an explanation about the light
  • Nimra and others said they were unable to see the sky screen when it was behind the window, so we moved it to the side. The problem that arose with this was that there was then too much for the user to try to pay attention to.
    • Regarding this, Antonius suggested that we get fabric, hang it behind the flower, and project the Sky sketch onto that fabric. This would make it easier to see.
    • In the same vein, Rudi suggested that we move the grass screen, perhaps in front of the flower/ window, so that the experience would be even more immersive
  • Antonius said we do not need two Arduinos! This makes sense. But it also confuses me, since we are sending the Arduino values to two separate Processing sketches. Is there a way to combine what's in Processing? I am not sure, but I am inclined to think not.
  • Rudi said that the syringe for watering the plant completely didn’t work for him, and that it was very weird
  • Antonius (and others) said there was something really enjoyable about the plant turning around to face them
  • Antonius said he would like for there to be a day and night for the sky sketch, and not to have total control over what was happening on the screen all the time. In other words, the default state would not just be stillness on the screen, but some kind of animation mimicking nature

Notes on a project I tested:

One project that left quite an impression on me was the Minion eye-testing project. I felt quite immersed in the experience and really did not focus on the fact that it was an Interaction Lab midterm while I was using it. I wear glasses and have taken several eye exams before, so I was able to intuitively understand what I was supposed to do. I might have liked the buttons to be a bit closer together, but overall the function of the buttons was pretty clear to me. I just observed the letter on the screen and clicked the button that corresponded to what I thought I saw. As I moved forward, the letter got smaller and smaller, and thus it got harder to identify which way it was facing. Another suggestion I have for the project is to create more immediate feedback for the user when they fail the test, instead of having the user end the test on his or her own.

 

 

Arduino Code:

#include <Servo.h>
int sensor1 = 0; //connect light sensor to A0
int sensorValue = 0;
int val1;
int timer = 0;
Servo myservo;

void setup() {
  // myservo.attach(3);
  Serial.begin(9600);
  // myservo.write(10); //so that the servo always starts at 10
}

void loop() {
  val1 = analogRead(sensor1) / 4;
  Serial.write(val1);

  if (val1 > 200) {
    timer = 1000; //delays are bad!!! this is better
  }
  if (timer > 0) {
    myservo.write(170);
  } else {
    myservo.write(10); //how to put a delay on this homie so it waits
    //5 seconds before turning back around
  }
  timer--;

  //myservo.write(180);
}

import processing.serial.*;

Serial myPort; 
String myString = null;
int val1;
PImage Sunimg;
PImage Cloudimg;

Cloud myCloud1;
Cloud myCloud2;
Cloud myCloud3;
Cloud myCloud4;

int SUNSIZE = 679;
boolean growing = false;
int growtime;

boolean cloudparting = false;
int parttime;

float c;
float g;

void setup() {
  size(displayWidth, displayHeight);
  printArray(Serial.list());
  val1 = 1;
  myPort = new Serial(this, "COM6", 9600);
  Sunimg = new PImage();
  Sunimg = loadImage("SunCropped.png");
  Cloudimg = new PImage();
  Cloudimg = loadImage("MidtermCloudPNG.png");
  myCloud1 = new Cloud(width/2, (height/2)-100, 1);
  myCloud2 = new Cloud(width/2, (height/2)+100, 1);
  myCloud3 = new Cloud(width/2, (height/2)-300, 1);
  myCloud4 = new Cloud(width/2, (height/2)+300, 1);
  c = 120;
  GrowSun(0.3, SUNSIZE);
}

void draw() {
  while (myPort.available() > 0) {
    val1 = myPort.read();
  }

  Sky(); //draw the sky

  Sun(); //draw the sun

  //display my clouds 
  myCloud1.display(); 
  myCloud2.display();
  myCloud3.display();
  myCloud4.display();

  //println(val1);
  if (val1 > 182) {
    if (!cloudparting) {
      //make it false so it only starts the trigger once
      cloudparting = true;
      parttime = frameCount;
      //start time from when the clouds begin parting
    }
    myCloud1.goright();
    myCloud2.goright();
    myCloud3.goleft();
    myCloud4.goleft();
  }
  if (val1 > 190 && frameCount > parttime+30) {
    if (!growing) {
      growing = true;
      growtime = frameCount;
    }
    g = map(frameCount, growtime, growtime+120, 0.3, 8); //possible offender for slow frame rate
    GrowSun(g, SUNSIZE);
  }
  if (val1 > 190 && frameCount > growtime+140) {
    SpinSun();
  }

  //float c = map(val1, 120, 255, 155, 0);
  if (val1 < 150 && val1 >= 120) {
    if (c <= 119) {
      c++;
    } 
    if (c > 120) {
      c--;
    }
  } else if (val1 < 120) {
    if (c < 230) {
      c++;
    }
  } else {
    if (c > 0) {
      c--;
    }
  //  //lerping in order to smooth the transition of the colors
  }
  //println("c= " + c);
  fill(10, c);
  rect(0, 0, width, height);
  //println(frameRate);
}

//cloud tab:
class Cloud {


  float xpos;
  float ypos;
  float xspeed;

  Cloud(float tempXpos, float tempYpos, float tempXspeed) {
    //display() handles the drawing, so the constructor only stores position and speed
    xpos = tempXpos;
    ypos = tempYpos;
    xspeed = tempXspeed;
  }
  void display() {
    image(Cloudimg, xpos, ypos);
  }

  void goright() {
    xpos = xpos + 3*xspeed;
    if (xpos > width-200) {
      xspeed = 0;
      //xpos = width;
    }
  }

  void goleft() {
    xpos = xpos - 3*xspeed;
    if (xpos < 200) {
      xspeed = 0;
    }
  }
}

//Sky tab:
void Sky() {
  fill(85, 152, 219); //dark sky
  rect(0, 0, width, height);
}

//Sun tab:
int rotation = 0;
int w = 300;
int h = 296;
//float growth;
//int maxsize = 

//0 makes the image growth proportional
//

void Sun() {
  pushMatrix();
  imageMode(CENTER);
  translate(displayWidth/2, displayHeight/2);
  rotate(radians(rotation));
  tint(255);
  image(Sunimg, 0, 0, w, h);
  popMatrix();
}

void GrowSun(float g, int maxsize) {
  if(g > 8) {g = 8;} //ignore huge growth numbers
   else {
     w+=5;
     h+=5;
   }
  //Sunimg = loadImage("SunCropped.png");
  //Sunimg.resize(0, int(growth*maxsize));
  //cast as an int because growth is a float
}

void SpinSun() {
rotation++;
}


//for spin sun, increase rotation
//make a new rotation variable

IxLab Week 7: Guest Talk

I found the talk last week from the hackers from Xinchejian to be quite insightful and inspiring. While Eduardo Alarcon was speaking, he made a really great point about the Internet of Things (IoT) being missing in education. It made me wonder how many other technologies and concepts are mainly relegated to the world of big businesses and major industry, and yet have the potential to be made accessible to many more people. One thing I wonder about the TOKYmaker is this: given the modularity of the components that TOKY uses for coding and the fact that they look like puzzle pieces, could the pieces from TOKYmaker's code be made into physical toys and introduced to children at a young age? In places where smartphones are ubiquitous, many parents are hesitant to give them to their very young children. But perhaps creating a physical version of the code components could eliminate the need for a screen altogether and allow kids to integrate something coding-based into their play.

After the talk, Jessica and I talked to Andy Garcia about our project. We were particularly interested in talking to Andy because his work involves urban farming, and thus he knows a great deal about plants. Even though our project is based on a fake plant, we wanted to talk to Andy to get some ideas about ways to attract a user to interact with our project. He told us that one of the signs most visible to plant owners of their plant lacking nutrients is if the plant is wilted and drooping. This led us to reconsider how important it would be to have a servo motor that could move the plant so that it goes from drooping to standing upright. Andy emphasized to us that with whatever we made, we should start simple, then scale eventually and add more to make it complex.

Ix Lab Recitation 6: Serial Communication!

Step 1

What I set out for was to control two components of the RGB LED color based on the movement of the mouse in the X and Y direction. I talked with one of the fellows about this and he suggested that since this would require controlling multiple outputs, I should just focus on controlling one component of the light (either red, green or blue) using one direction of the mouse movement.

First I made sure that the light would turn on just focusing on the Arduino code. As I have never used the RGB LED before, I wanted to be confident I did not have any circuitry errors before going forward and creating some Processing code to control the little LED friendo. Also, this LED seemed significantly more precious to me than the red, blue and green ones since our kits only come with one RGB LED.

At first, the light was not turning on. So sad. I was confused. I thought I was doing it right. But, thanks to the help of a fellow (I’m sorry I forget your name :/) I was made aware of the fact that I had forgotten to connect my breadboard to power! It was quite a simple fix, and afterward, it was finally, literally, lit. Lesson learned: do not overlook the most basic aspects of something when trying to fix it. Think: Y2K example referenced in class.

After getting the LED to turn on, I moved on to the Processing code. I followed the example from Antonius’ Class 12 slides for sending data from Processing to Arduino, which was a great help in structuring my code. Still, the code was not working. Sad story. I checked my Arduino code again, but the real problem turned out to be on the Processing side.

A fellow (the same one whose name I forgot earlier) pointed out to me that I had accidentally put the myPort initialization before void setup(), so nothing was happening because the port was never being initialized. The other problem was that I had declared mouseX as an int, so my mouse movement was not affecting the output at all. This gave me some sads, but I asked for help and was able to solve the problem. mouseX is already a built-in Processing variable, so by declaring my own int mouseX I was shadowing it and confusing the computer. This became obvious when I added println(mouseX) to my code and saw that mouseX stayed constant. Alas, commenting out my previous int mouseX made for functional code and a happier Maudie.
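Here is a minimal sketch (an illustration, not my actual recitation code) of what that shadowing problem looks like:

import processing.serial.*;

Serial myPort;
//int mouseX;  //declaring this shadows Processing's built-in mouseX,
//             //so it stays at 0 no matter where the mouse actually is

void setup() {
  size(255, 255);
  //the port has to be opened inside setup(), not up above with the declarations
  myPort = new Serial(this, "COM6", 9600);
}

void draw() {
  println(mouseX);      //with the shadowing line commented out, this updates as expected
  myPort.write(mouseX); //send the built-in mouseX value to Arduino
}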

Arduino Code:
int valueFromProcessing; //Serial.read() returns an int (0-255)

void setup() {
  Serial.begin(9600);
  pinMode(9, OUTPUT);  //red channel, controlled from Processing
  pinMode(10, OUTPUT); //green channel, held HIGH below
  pinMode(11, OUTPUT); //blue channel, held HIGH below
  //in a future iteration I could control 10 and 11 from Processing too
}

void loop() {
  digitalWrite(10, HIGH); //so that green will always be on
  digitalWrite(11, HIGH); //so that blue will always be on

  if (Serial.available()) {
    valueFromProcessing = Serial.read();
    analogWrite(9, valueFromProcessing); //control amt of red light with Processing value
  }
}

Processing code:
import processing.serial.*;
//int r;
//int g;
//int b;
//int mouseX;  //declaring this again was the bug -- it shadows Processing's built-in mouseX
//int mouseY;

Serial myPort;
int valueFromArduino;

void setup() {
  size(255, 255);
  background(0);
  myPort = new Serial(this, "COM6", 9600);
}

void draw() {
  myPort.write(mouseX);
  //tell arduino mouseX to pin 9
  //i want mouseX value to control the r value
  //i want mouseY value to control the b value
  println(mouseX);
}

Step 2

I wanted to use a potentiometer because I enjoy the interactivity of it despite the fact that it’s somewhat awkward to use. Originally, I wanted to make the movement of the potentiometer grow and shrink a circle on the screen. I could have done this by having the input value from the potentiometer increase and decrease the width of the circle.

Then I thought it might be neat to write a sketch that changes the position of an ellipse based on the potentiometer, so that you could “roll” a ball across the screen. I was not sure how I could create a “rolling” effect, but basically, I wanted to write the Arduino input from pin A1 to the x position of the ellipse.

Again, for coding, I went to Antonius’ Class 12 slides to look at the samples for sending an input from Arduino to Processing. Following that as a model for the basic parts of the code, I could then write the code so that the value from my potentiometer became the x position of the ellipse on the screen. I had already taken care of dividing the value by 4 in my Arduino code, since the input values range from 0 to 1023 and I needed them to be from 0 to 255.
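The divide-by-4 works because Serial.write() sends a single byte (0–255). Arduino’s built-in map() would be an equivalent, slightly more explicit way to do the same scaling (just an alternative sketch, not what I actually used):

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(A1);
  //same effect as dividing by 4: squeeze the 0-1023 reading into one byte
  Serial.write(map(sensorValue, 0, 1023, 0, 255));
}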

When I first ran my code, it worked, but the circle left a trail as I moved it around the screen, meaning that every time Processing received a new input from Arduino, it was drawing a new circle. I wanted it to appear as though there was only one circle, and that that single circle was changing position along with the movement of the potentiometer.

Thank you to Jessica, my roommate/true homie/superior coder, for showing me that I needed to put my background() call inside the draw loop so that the circle would not leave a trail. By doing this, Processing draws a new background every time the draw loop runs, effectively erasing the last circle. Yay.

Then I had to deal with the fact that the screen was blinking all the time, which was quite annoying, but Rudi helped me with it. After that, Rudi started showing me some changes I could make to my code to make the animation more “spicy”, and in my excitement over the spice, I forgot to write down what exactly the solution was to the constant blinking. I am going to try and go back to figure out what was making it blink so that I can be cognizant of this in future coding.
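Looking back at my code below, my best guess (I can’t be sure, since I didn’t write Rudi’s actual fix down) is that the flicker came from clearing and redrawing the screen inside the while loop that reads serial data, rather than once per frame. A structure like this, using the same variables as my final code below, would keep the drawing at a steady once-per-frame rhythm:

void draw() {
  //read everything that has arrived, but only keep the most recent value
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
  }

  //draw exactly once per frame, using the latest value
  background(255);
  float xPos = map(valueFromArduino, 0, 255, width - diameter, diameter);
  float yPos = cos(map(valueFromArduino, 0, 255, -2*PI, 2*PI)) * height;
  float ballColor = map(valueFromArduino, 0, 255, 0, 255);
  fill(255 - ballColor, 0, ballColor);
  ellipse(xPos, yPos, diameter, diameter);
}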

One thing I noticed was that the movement on the screen was “opposite” the movement of the potentiometer. When I turned the potentiometer to the right, the circle moved to the left. My first instinct was to place a negative sign in front of the x position, reversing the opposite-ness that was occurring. Rudi suggested that I use a map function instead, which had the same effect but also made it easier for me to see what was going on with my code, I think.

Rudi also showed me that I could use the cos function to make the circle move in an arc. I wanted the fill color of the ball to change constantly, so I declared a float variable that I could use to change the red and blue component of the color based on the value being read from Arduino.

Arduino Code:
void setup() {
 // put your setup code here, to run once:
 Serial.begin(9600);

}

void loop() {
 // put your main code here, to run repeatedly:
 int sensorValue = analogRead(A1) / 4;
 Serial.write(sensorValue);


}
Processing Code:
import processing.serial.*;

Serial myPort;
int valueFromArduino;
float xPos;
int diameter;

void setup() {
  size(800, 500);
  diameter = width/5;
  printArray(Serial.list());
  myPort = new Serial(this, "COM6", 9600);
}

void draw() {
  while (myPort.available() > 0) {
    valueFromArduino = myPort.read();
    background(255);
    xPos = map(valueFromArduino, 0, 255, width-diameter, diameter);
    //swapping the bounds in map() mirrors the rotation of the potentiometer
    //(I had added a negative sign at one point for the same effect)
    float yPos = cos(map(valueFromArduino, 0, 255, -2*PI, 2*PI))*height;
    float ballColor = map(valueFromArduino, 0, 255, 0, 255); //mapping not strictly necessary
    fill(255-ballColor, 0, ballColor); //set the fill before drawing so the color applies to this circle
    ellipse(xPos, yPos, diameter, diameter);
  }
}

Step 3

What are the possibilities for interaction that you can envision?

After working with the potentiometer today, I can imagine using other analog inputs (a distance sensor, a moisture sensor) to produce animations on the screen. One thing that comes to mind is using a pressure sensor (or something better suited to this, though I’m not sure what that would be) that someone could blow on; on the screen, there would be a pinwheel whose rotation speed was controlled by how hard they blew on the sensor.
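Just to sketch the pinwheel idea (purely hypothetical, assuming the breath reading arrives over serial as a 0–255 value like in the exercise above, and drawing the pinwheel as four simple triangular blades):

import processing.serial.*;

Serial myPort;
int sensorValue;  //0-255 "breath strength" from Arduino
float angle;      //current rotation of the pinwheel

void setup() {
  size(600, 600);
  myPort = new Serial(this, "COM6", 9600);
}

void draw() {
  while (myPort.available() > 0) {
    sensorValue = myPort.read();
  }
  background(255);

  //harder breath -> faster spin
  float speed = map(sensorValue, 0, 255, 0, 0.3);
  angle += speed;

  translate(width/2, height/2);
  rotate(angle);
  fill(100, 150, 255);
  //four triangular blades, 90 degrees apart
  for (int i = 0; i < 4; i++) {
    triangle(0, 0, 150, -40, 150, 40);
    rotate(HALF_PI);
  }
}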

 

What are some of the benefits of controlling physical elements with computational media and of controlling screen-based computational media through physical interaction?

When controlling physical elements with computational media, we produce physical changes remotely. This means that relatively less physical movement (i.e. mostly coding) is needed to produce a physical output. Controlling physical elements this way makes for somewhat less work for the operator. As for controlling screen-based computational media through physical interaction,  this creates new interfaces for our communication with the computer beyond keyboard and mouse. While keyboard and mouse allow us to do a lot, they are quite limited compared to the vast range of motions and actions that we as humans are capable of performing.

 

Can you think of real world applications that take advantage of communication between the physical and screen-based computational media?

I saw some really incredible displays of physical computing at the Olympics in the Samsung pavilion. They had several different VR experiences, two of which had a rather high degree of interaction. The one I got to try was a skiing game where you stood on an apparatus that allowed you to perform a movement that mimicked slaloming. Each person had their own large screen in front of them which displayed a skier who was moving through checkpoints down a ski slope based on your own movement on the apparatus. The snowboarding one was a bit cooler, I think, but had a significantly longer wait time. From what I observed, participants stood on a device that looked like a snowboard and put on VR glasses. Their movements on the “snowboard”, while obviously confined to a space of roughly 2 square meters, were sent to the virtual reality display and altered, in real time, the path they were taking on the snowboarding course.

Here’s an article from Samsung with some photographs of their VR experiences: https://news.samsung.com/my/olympians-from-around-the-world-visit-samsung-olympic-showcase-in-gangneung-olympic-park-during-olympic-winter-games-pyeongchang-2018

IxLab: Animation in Processing!

A huge thanks to fellow Tristan, who talked me through the use of the for() loop and helped me figure out how to write my code so that the chips would stack. Here’s a picture of the notes Tristan wrote/sketched while we were talking:

The main focus of the notes was understanding what variables I needed and how to use some simple math to produce the result I wanted.

These are links to the resources that helped me develop my code:

I used both of these to figure out how to incorporate scale() into my code.

https://processing.org/reference/scale_.html

https://processing.org/examples/scale.html

 

Original Image:

Image Source: https://itunes.apple.com/ca/album/lotto-the-3rd-album-repackage/1143527580

Step 2:

Originally, I wanted to recreate EXO’s logo, which is hexagonal in nature and has changed over the years. I wanted to create an animation that started as their original logo and then morphed to show the progression of the logo over time. After searching online, it seemed like working with a hexagon might be a little complicated. I asked Tristan, and he said that because of how I wanted the drawing to be able to move, it would make the most sense to create a bunch of individual lines so that I could move them separately. If I created a hexagon, all of the sides would be attached and it would not be practical to manipulate the shape in the way I wanted to. I looked back at EXO’s various album covers and saw the cover for the 3rd repackaging of their 3rd album, EX’ACT (shown above). I decided, then, to try and create an animated version of this image in which the poker chips stack up in succession and then fall down/explode.

The first step was to create one chip, which wasn’t terribly difficult. Then, I had to figure out how to get some chips (i.e. ellipses) to stack on top of one another in succession. I was pretty stumped at first, so I sat and thought for a while, asked Antonius for help, and did some more staring at my screen and thinking. I figured out that I needed to use a for() loop in order to create the animation I was after and to have the stacking of the chips stop after a certain amount. After recitation, I went straight to Tristan’s office hours to get help with continuing the code. As Antonius said, we had not gone over for() loops in class yet, so I had to somewhat teach myself (and rely on help from professors/fellows). For() loops were included in the reading that I did, though, so I felt I had a basic understanding of how they worked.

Working with Tristan, I was able to successfully get the chips to stack. After that, I wanted to get the chips to fall down (or explode or something). He sent me off to try and figure it out, telling me that I needed an “else” after my for() loop. I pressed on but admittedly was a bit confused. I asked Antonius for help a little later and he guided me with some questions, having me use println() to look at three of the variables in my code and try to make sense of why they were behaving the way they were. After I ran the code several times, looked at the different values being printed, and then looked back at my code, I realized that the place where, for example, “numChips” was being updated meant that its value would continuously increase, regardless of the chips on the screen disappearing. It was a kind of “misnomer”, as Antonius called it. This closer look at my code was really helpful for me in getting a stronger grasp of how different parts of my code were working and what changes I needed to make to produce the result I wanted.

In the end, I struggled to try to figure out how to get the chips to all fall down and did not succeed in creating exactly the kind of animation that I wanted. However, the chips in my final code do fly up to the top of the screen, slowly getting smaller (with the use of scale()). Ideally, I would have liked for the chips that had already stacked to remain on the screen and have those chips be the ones that moved, but I couldn’t figure that out on my own and was unable to make it to the studio to get help.

 

QUESTIONS:

  • What are the differences and similarities between the image you chose in Step 1 and the image you created in Step 2?

The image I chose in Step 1 is the album cover of EXO’s 3rd repackaging of their 3rd album, titled “EX’ACT”. What initially made me think of using one of their album covers was my remembering the hexagonal nature of their logo. The image I created differs from the one I chose in that I did not replicate the details on the coins of the album cover. As seen below, the coins on the album cover have some more detail to them to make them look like poker chips. However, my drawing, though it initially becomes a stack of chips, goes on to become an “explosion” of chips, which the album cover clearly does not do since it is just an image without animation.

 

  • Do you think that drawing in Processing is different than drawing/painting by hand? How is it different? Why is it different?

My first instinct is to say “absolutely yes, it’s completely different”, because right now I could probably draw by hand a lot faster than I can draw something in Processing. Also, when we draw with ink on a non-virtual surface (i.e. paper, canvas) that which we want to draw appears before us pretty much instantly. It takes some time to complete the whole picture, but each individual stroke is made visible as we create it. Processing is different because the computer is not a human, and needs a full and readable set of commands presented to it in order to execute a certain stroke or shape. As we are coding a drawing on Processing, we can repeatedly hit ctrl+R in order to check our progress, but it’s not quite the same as having the result of your work immediately show up as you create it. To be fair, as I am quite new to this whole Processing thing, the coding I am doing still involves some trial and error, and I would bet that people with more experience can look at some code and translate in their imagination what the resulting image/ animation will look like. I can presently do this with basic things, like positions and colors and shapes, but to imagine how an animation will play out takes a little extra brain power.

 

  • Is there any value in recreating an existing image/drawing/graphic using computational media? What is it?

Yes, there is certainly value in such a recreation. For one, it makes the original art accessible to those who otherwise might not be able to see it. Recreating the image or drawing using computational media also allows us to do fabulously fun things like animate the image and continue to add new layers of meaning to it. Also, when you use a medium like paint or pastels, you are quite limited in your ability to undo the choices you make. This is not the case with coding. One great advantage is that we can comment things out using ctrl+/ to temporarily see how our code runs without the line(s) we’ve commented out, and then we can just as easily put those lines back in.

 

  • Do you think that both drawings cause the same feelings in the viewer?

If the drawing is made to such an extent that it very much resembles the original, and it is presented in a similar context, I think it can cause the same feelings in the viewer. In my case, perhaps a loyal EXO-L (aka, an EXO fan) would see the drawing I created and immediately be reminded of the EX’ACT album cover. But, without the smaller artistic details from the album cover, the viewer might not recognize the image as being related to EXO. Also, if the viewer has no familiarity with EXO in the first place, there is definitely nothing in my drawing that would make it evident to them that my animation is related to a Korean boy group.

 

  • If you were to create multiple variations of your drawing, what would these variations be and how would they differ?

Right now, the drawing does not incorporate any interaction. If I made more variations of the drawing, I would try to add different kinds of interaction. I think it would be fun if the chips required a mouse click to be stacked, so each time you pressed the mouse another chip would be added (a rough sketch of this idea is below, just before my final code). I would also create a variation in which, when the chips all stacked up, EXO’s song “Lotto” started playing. And I would create a variation that incorporated random variables so that the frameRate and number of chips could be different in every run of the code.
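As a rough sketch of that first variation (hypothetical code, not part of what I submitted), each mouse click could add one chip to the stack:

int numChips = 0;
int MAX_CHIPS = 25;

void setup() {
  size(1000, 800);
}

void draw() {
  background(255);
  stroke(255);
  fill(0);
  //draw however many chips have been added by clicks so far
  for (int i = 0; i < numChips; i++) {
    ellipse(width/2, height - i*30, 120, 60);
  }
}

void mousePressed() {
  //each click adds one chip, up to the cap
  if (numChips < MAX_CHIPS) {
    numChips++;
  }
}

And here is the final code for my animation as it stands: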

int circleX;
int circleY;
int numChips;
int interval = 10;
int MAX_CIRCLES = 25;
float r = random(0, 10); //declared this but never ended up using it below

void setup() {
  size(1000, 800);
  frameRate(60);
  circleX = width/2;
  circleY = height;
}

void draw() {
  background(255);
  stroke(255);
  fill(0);
  ellipse(circleX, circleY, 120, 60);
  numChips = (frameCount/interval) + 1;
  if (numChips <= MAX_CIRCLES) {
    for (int i = 0; i < numChips && i < MAX_CIRCLES; i++) {
      ellipse(circleX, circleY-(i*30), 120, 60);
    }
  } else {
    for (int i = 0; i < numChips && i < MAX_CIRCLES; i++) {
      //note: CENTER is a Processing drawing-mode constant, not a pixel coordinate,
      //so the x-shift here is only a few pixels, and these translate() calls accumulate
      translate(CENTER, height); //(intended to start the black floating circles from the center)
      fill(0); //black
      scale(.8); //shrinks everything drawn afterward; it compounds each pass, so eventually
      //the chips become so small you can't see them
      ellipse(circleX, circleY+(i*30), 120, 60);
      circleY -= 1;

      translate(CENTER+50, height); 
      fill(0); //black
      ellipse(circleX, circleY+(i*30), 120, 60);
      circleY -= 1;

      translate(CENTER-50, height); 
      fill(0); //black
      ellipse(circleX, circleY+(i*30), 120, 60);
      circleY -= 1;

      translate(CENTER+100, height); 
      fill(0); //black
      ellipse(circleX, circleY+(i*30), 120, 60);
      circleY -= 1;

      translate(CENTER+150, height); 
      fill(0); //black
      ellipse(circleX, circleY+(i*30), 120, 60);
      circleY -= 1;

      translate(CENTER-100, height); 
      fill(0); //black
      ellipse(circleX, circleY+(i*30), 120, 60);
      circleY -= 1;
    }
  }
  println(circleX);
}