Interaction Lab Final – Nap Box – Kathy

Interaction Lab Final Project (Spring 2017)

Instructor: Moon

Collaborator: Esther Liu

Ideation

For the final project, I decided to make a nap box that comforts people and connects them to their environment. Unlike the midterm project, where we knew from the very beginning of ideation what the final piece should look like, our final project kept evolving: we changed and developed our ideas as we built.

Originally we simply wanted to do a real-time visualization of our surroundings through a simple robot to address environmental concerns. However, that was beyond the course scope and our current skill set, so we started thinking: what about creating something immersive that can connect a person with the environment? Meanwhile, we noticed that many of our friends were complaining about constant pressure and a lack of sleep and concentration during the busy final weeks. After discussions with Moon and other friends, Esther and I decided to make a nap box that not only lets people temporarily get away from the hustle and bustle and recharge, but also raises their awareness and appreciation of the environment they are in through real-time animation, which actually goes back to our initial idea.

Materials

  • cardboard
  • acrylic paint
  • joystick
  • software: Processing, Arduino, Illustrator

Process

Part I: Programming

We prepared two modes for the animated effect that users can see and play with when they lie down and stare at the iPad screen above. In the first mode (starMode), the user creates their own star field through joystick interaction, as well as through sensor values received from the environment, such as brightness and loudness. My first step was to create a simple star field without any human or environmental interaction. The star field part was a little tricky, because Processing syntax is different from p5.js, so I did some research online (Daniel Shiffman's tutorials are so useful!), wrote and tested the star class in p5.js first, and then "translated" the code into Processing. I am glad that it worked!
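For reference, here is a minimal Processing star field in the spirit of Shiffman's tutorial; the names and numbers are illustrative, not the exact ones from our project:

// A minimal star field (Processing), in the spirit of Daniel
// Shiffman's star field tutorial. Names and values are illustrative.
Star[] stars = new Star[400];

void setup() {
  size(800, 600);
  for (int i = 0; i < stars.length; i++) stars[i] = new Star();
}

void draw() {
  background(0);
  translate(width/2, height/2);   // fly toward the center
  for (Star s : stars) {
    s.update(10);                 // constant speed for now
    s.show();
  }
}

class Star {
  float x, y, z;
  Star() { reset(); }
  void reset() {
    x = random(-width/2, width/2);
    y = random(-height/2, height/2);
    z = random(width);
  }
  void update(float speed) {
    z -= speed;                   // the star moves closer
    if (z < 1) reset();
  }
  void show() {
    // project the 3D position onto the 2D screen
    float sx = map(x / z, 0, 1, 0, width/2);
    float sy = map(y / z, 0, 1, 0, height/2);
    float r  = map(z, 0, width, 8, 0);  // nearer stars look bigger
    noStroke();
    fill(255);
    ellipse(sx, sy, r, r);
  }
}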

Screen Shot 2017-05-19 at 2.10.25 PM

Screen Shot 2017-05-19 at 2.54.29 PM

The second step was to add the different sensor values to the existing code, so the star field does not just move randomly but responds to the surrounding environment (brightness and loudness) as well as joystick control. You can view the star field as a reflection of the environment around you, and you can intervene in the visualization through the joystick. To visualize the surroundings, the Processing sound and video libraries are necessary. Studying the libraries and getting the volume and average brightness values from the laptop mic and webcam was not as challenging as I expected. However, we encountered a problem sending multiple values from Arduino to Processing; we asked Moon for help and solved it.
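The fix was, roughly, the usual pattern of sending all the values as one comma-separated line and splitting it in Processing. A minimal sketch of that pattern (the port index and the number of values are assumptions that depend on your setup):

// Reading multiple comma-separated sensor values from Arduino,
// e.g. a line like "512,300,1,0\n". The port index [0] is an
// assumption; it depends on your machine.
import processing.serial.*;

Serial port;
int[] sensorValues = new int[4];

void setup() {
  size(400, 400);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');   // fire serialEvent once per full line
}

void draw() {
  background(0);
  text("joystick x: " + sensorValues[0], 20, 20);
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  if (parts.length == sensorValues.length) {
    for (int i = 0; i < parts.length; i++) {
      sensorValues[i] = int(parts[i]);
    }
  }
}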

Screen Shot 2017-05-19 at 2.54.12 PM

Moreover, we added a sound file to the Processing sketch to create a meditative feeling and make the visual effect more appealing. In short: the average brightness of the environment controls the brightness of the stars, the loudness of the environment controls the speed of star movement, and the joystick controls the perspective of the star field through four values (up, down, left, right).
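For anyone who wants to try the environment inputs themselves, here is a minimal sketch of reading the average webcam brightness and mic loudness with the Processing Video and Sound libraries and feeding them through map(); the mapping ranges are illustrative, not our final values:

// Two environment inputs: average brightness from the webcam and
// loudness from the mic, mapped onto visual parameters with map().
// The mapping ranges are illustrative.
import processing.video.*;
import processing.sound.*;

Capture cam;
Amplitude loudness;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  AudioIn mic = new AudioIn(this, 0);
  mic.start();
  loudness = new Amplitude(this);
  loudness.input(mic);
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  float sum = 0;
  for (int i = 0; i < cam.pixels.length; i += 50) {  // sample every 50th pixel
    sum += brightness(cam.pixels[i]);
  }
  float avgBrightness = sum / (cam.pixels.length / 50.0);

  // illustrative mappings, as described above
  float starBrightness = map(avgBrightness, 0, 255, 50, 255);
  float starSpeed      = map(loudness.analyze(), 0, 1, 2, 30);

  background(0);
  fill(starBrightness);
  ellipse(width/2, height/2, starSpeed * 5, starSpeed * 5);
}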

For the second mode (randomMode), I got inspiration from laptop screen savers. It has similar functionality to starMode, but with different visuals. At first I did not know how to create the visual; the reference I found online helped a lot. In this mode, brightness controls the shape of the pattern and loudness controls its size, while the joystick adjusts the sound volume (up/down) and the pattern opacity (left/right). You can also stop and resume the music by clicking the joystick. I spent a lot of time adjusting the parameters (rad, shap_ind) to improve the visual and make the sensor values fit the controls.

Screen Shot 2017-05-19 at 2.54.35 PM

Here is a sample of the pattern.

Screen Shot 2017-05-19 at 2.10.32 PM

Switching modes was another small issue we faced. After I created the two modes, I found that even when I used different keys to call each mode function, the two modes were always drawn on top of each other instead of replacing one another. Later I figured out that I should not put the key detection in the draw loop; instead, I needed a separate keyPressed function to reset the background, plus a boolean value (mode) to control the switch. Below is the correct code:

Screen Shot 2017-05-19 at 2.54.03 PM
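In text form, the structure of the fix looks roughly like this (reconstructed from memory, not a copy of the screenshot; the key bindings are illustrative):

// Keep the key check OUT of draw() and use a boolean to decide
// which mode draws each frame. Keys 's' and 'r' are illustrative.
boolean mode = true;   // true = starMode, false = randomMode

void setup() {
  size(800, 600);
  background(0);
}

void draw() {
  if (mode) {
    starMode();
  } else {
    randomMode();
  }
}

void keyPressed() {
  if (key == 's') {
    mode = true;
    background(0);   // reset the canvas so the modes don't stack
  } else if (key == 'r') {
    mode = false;
    background(0);
  }
}

void starMode()   { /* draw the star field here */ }
void randomMode() { /* draw the screensaver pattern here */ }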

 

Part II: Physical Installation 

We designed the nap box shape in Adobe Illustrator and got five individual pieces from the laser cutter. After gluing them together, I covered the box with sketch paper and used acrylic paints to color the surface. I waited for it to dry and added more layers to deepen the color. Finally, I sprayed the box to seal the color. We also created a small box to hold the Arduino and joystick, because we wanted the user to interact with the visuals through the joystick.

Screen Shot 2017-05-19 at 2.09.58 PM

This is how to use our nap box!

Screen Shot 2017-05-19 at 2.10.06 PM

The view inside the box~

Demo:

PPT Slide: https://drive.google.com/open?id=0B_DiZA6WY5uOU2IxQUVqWm9uYmc

Code download: https://drive.google.com/open?id=0B_DiZA6WY5uOZHZERTREbHZVSlk

Technical Summary

  • serial communication (multiple sensor values, joystick control)
  • Processing video and sound libraries (brightness tracking, sound analysis)
  • object-oriented programming (Perlin noise, ArrayList, mapping, sin(), frameCount, etc.)

Final Remarks

I would like to give my sincerest gratitude to Professor Moon for his guidance throughout the course and the inspiration he has given me. I am also grateful to Jiwon, Aven, and all the IMA fellows for their generous support on each assignment. I have learned so much from this semester's IMA courses, which made my last semester at NYU Shanghai truly fulfilling and meaningful.

NOC Final Project – Dandelion

Nature of Code Final Project – Life of a Dandelion

Instructor: Moon

I simulated the life of a dandelion, from growing to flying away, for the final project. My initial idea was quite simple: because my midterm project was about patterns, which are very organized and structured, for my final project I wanted to explore things that are irregular and organic in nature, such as wind and the growth of plants and flowers. Dandelions came to mind first, so I decided to animate a dandelion in p5.js. As I searched online and absorbed more inspiration, I thought it would be great to create something more than just dandelion seeds in nature, something artistic that expresses a moment of tranquility and beauty, so I decided to add a transformation into music notes and flying birds.

I faced a few challenges during the project, including improving system performance, controlling the dandelion seeds' movement, making the transitions smoother, and forming the clock shape.

My first step was to create the dandelion clock shape. I built on my midterm project, drew a seed shape, a bird shape, and five types of music notes using line and curve vertices, and then "attached" them to the particle system I had created. I then rotated each seed shape based on its position relative to the center, to make sure they all point toward the center.
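Here is a minimal sketch of that rotation idea, shown in Processing syntax for consistency with the rest of this blog (the p5.js version is nearly identical, with push()/pop() instead of pushMatrix()/popMatrix()):

// Rotate each seed by the angle of its position relative to the
// center so its axis lines up with the line through the center.
// The line-and-dot "seed" is a stand-in for the real seed shape.
void setup() {
  size(600, 600);
}

void draw() {
  background(255);
  translate(width/2, height/2);
  int seeds = 36;
  for (int i = 0; i < seeds; i++) {
    float a = TWO_PI * i / seeds;
    float r = 150;
    float x = r * cos(a);
    float y = r * sin(a);
    pushMatrix();
    translate(x, y);
    rotate(atan2(y, x));      // angle relative to the center
    line(0, 0, 40, 0);        // stem, along the radial direction
    ellipse(40, 0, 10, 10);   // fluffy end pointing outward
    popMatrix();
  }
  noLoop();
}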

The second step was to create the flow effect. I tried the flow field we covered in class, but gave up because it hurt system performance too much, so I decided to use only simple forces and velocity control, combined with sine and noise values. This part was really tricky, because I found that very small parameter changes produce distinctly different results.
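A minimal sketch of the simple-forces approach (again in Processing syntax; the constants are illustrative): each particle gets a small force whose angle drifts with noise() and sways with sin():

// Instead of a full flow field, each particle receives a small
// "wind" force; the constants here alter the look dramatically.
ArrayList<PVector> pos = new ArrayList<PVector>();
ArrayList<PVector> vel = new ArrayList<PVector>();

void setup() {
  size(600, 400);
  for (int i = 0; i < 100; i++) {
    pos.add(new PVector(random(width), random(height)));
    vel.add(new PVector(0, 0));
  }
}

void draw() {
  background(255);
  for (int i = 0; i < pos.size(); i++) {
    PVector p = pos.get(i);
    PVector v = vel.get(i);
    // wind direction drifts with noise and sways with sine
    float angle = noise(p.x * 0.005, p.y * 0.005, frameCount * 0.01) * TWO_PI * 2
                + 0.3 * sin(frameCount * 0.02);
    PVector force = PVector.fromAngle(angle);
    force.mult(0.05);
    v.add(force);
    v.limit(2);     // velocity control: tiny changes here matter a lot
    p.add(v);
    // wrap around the edges
    if (p.x > width) p.x = 0;
    if (p.x < 0) p.x = width;
    if (p.y > height) p.y = 0;
    if (p.y < 0) p.y = height;
    ellipse(p.x, p.y, 4, 4);
  }
}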

The most difficult part of the project, I think, was making the transformation from seeds to music notes and birds feel natural and organic. I tried the scale() function first, then opacity, but unfortunately both failed. I did not really have a clear idea of how to do the transition, or in what way and range I wanted the particles to move. Finally, thanks to Moon's help, I was able to make it work, as shown below. Basically, I created a lifespan check for each particle: when its life decreases to zero, the transform mode is activated; the particle's size then shrinks to zero, it changes to the other shape, and its size leaps to a new value. It was a very good learning process for me, because the transformation was divided clearly into different steps, so I could really understand how they function and work together.

Screen Shot 2017-05-20 at 2.02.49 AM
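Put as a tiny sketch (in Processing syntax; the names and thresholds are illustrative, not my actual code), the step-by-step logic is:

// Lifespan check -> shrink -> swap shape -> leap to a new size.
Particle p = new Particle();

void setup() { size(400, 400); }

void draw() {
  background(255);
  p.update();
  fill(p.isSeed ? 200 : 100);   // different look per shape, as a stand-in
  ellipse(width/2, height/2, max(p.size, 0), max(p.size, 0));
}

class Particle {
  float life = 255;
  float size = 20;
  boolean transforming = false;
  boolean isSeed = true;        // seed first, then music note / bird

  void update() {
    if (!transforming) {
      life -= 1;
      if (life <= 0) transforming = true;   // lifespan check
    } else if (isSeed) {
      size -= 0.5;                          // step 1: shrink to zero
      if (size <= 0) {
        isSeed = false;                     // step 2: swap the shape
        size = random(10, 30);              // step 3: leap to a new size
      }
    }
  }
}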

Demo:

Code: [download]

Presentation slides:

https://docs.google.com/a/nyu.edu/presentation/d/1uxIYmfb3F4tK6x4-Psg-lOM4yYsy6xp25mx-4KLBI2Y/edit?usp=sharing

Final Remarks:

Taking Nature of Code was one of the best decisions of my senior year. I learned programming from zero and can't believe what I have accomplished and gained this semester with the help of the fellows and my classmates. I will definitely explore this further in my work after graduation!

Lab 12 Media Controller – Kathy Wang

Class: Interaction Lab 12 Media Controller
Date: May 5, 2017
Instructor: Moon
Teammate: Jinzhong Yu
Exercise:
This lab will demonstrate your understanding of Processing and Arduino. Now that you know how to control images, audio, and basic video with Processing, and how Arduino and Processing can control each other, your assignment is to control the images and audio in a Processing sketch with a controller made with Arduino. You can use live images and sound, or recorded sound and pre-saved images, as long as you credit your sources.
Process:
For this exercise, we initially wanted to use a vibration sensor's value to control the pitch of a song. We first put together the Arduino part; judging from the values we received in the serial monitor, it worked well. For the Processing part, we tried the Minim library, but found that changing the pitch of a sound file was actually quite difficult, so we decided to change the speed of the song using the rate() function in the Sound library instead. We added the serial port code and passed the value from Arduino to rate(). When testing, we found that the vibration sensor's value was not very stable, which made the classical music we loaded sound a little bit annoying, so we replaced the vibration sensor with a potentiometer. The sound became much better, and our tests told us that 0.8–1.5 is a listenable rate range for the Bach piece. It was a pretty interesting experience hearing Bach played at different speeds.
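A minimal sketch of the final setup on the Processing side (the file name and serial port index are placeholders):

// A potentiometer value from Arduino (0-1023) is mapped onto the
// 0.8-1.5 playback-rate range we found usable.
import processing.serial.*;
import processing.sound.*;

SoundFile song;
Serial port;
float rate = 1.0;

void setup() {
  size(200, 200);
  song = new SoundFile(this, "bach.mp3");   // placeholder file name
  song.loop();
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(0);
  song.rate(rate);
  text("rate: " + nf(rate, 1, 2), 20, 100);
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  int sensor = int(trim(line));             // one value per line
  rate = map(sensor, 0, 1023, 0.8, 1.5);
}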
Demo:
https://youtu.be/o1ieTi72VMs
Code:
https://drive.google.com/open?id=0B_DiZA6WY5uOTm1FcTJhSEtNRUk

NOC-Week11-Autonomous Agents-Kathy

This week we learned how to create a relatively sophisticated autonomous system through functions such as separation, cohesion, and alignment. I adjusted the triangle shape a bit using the scale() function; interestingly, it can form both a floral shape and a school of fish, so I thought it would be fun to combine the two.

Below are a demo of my weekly assignment, "rose and fish," and the code. Click the mouse and the rose pattern will shift into fish that follow the mouse direction. I hope to improve the transition effect to make it prettier.
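For anyone curious about the three rules themselves, here is a compact, self-contained flocking sketch in Processing; the weights and radii are illustrative, not the ones from my assignment:

// Flocking with the three rules: separation, alignment, cohesion.
ArrayList<Boid> boids = new ArrayList<Boid>();

void setup() {
  size(640, 480);
  for (int i = 0; i < 60; i++) boids.add(new Boid(random(width), random(height)));
}

void draw() {
  background(255);
  for (Boid b : boids) {
    b.flock(boids);
    b.update();
    b.show();
  }
}

class Boid {
  PVector pos, vel, acc;
  float maxSpeed = 3, maxForce = 0.05;

  Boid(float x, float y) {
    pos = new PVector(x, y);
    vel = PVector.random2D();
    acc = new PVector();
  }

  void flock(ArrayList<Boid> all) {
    acc.add(steer(all, 25, true).mult(1.5));   // separation: avoid crowding
    acc.add(steer(all, 50, false).mult(1.0));  // alignment: match velocity
    acc.add(cohere(all, 50).mult(1.0));        // cohesion: move to local center
  }

  // averages either away-vectors (separation) or neighbor velocities (alignment)
  PVector steer(ArrayList<Boid> all, float radius, boolean separate) {
    PVector sum = new PVector();
    int count = 0;
    for (Boid other : all) {
      float d = PVector.dist(pos, other.pos);
      if (other != this && d < radius) {
        if (separate) {
          PVector away = PVector.sub(pos, other.pos);
          away.div(max(d, 0.01));   // closer neighbors push harder
          sum.add(away);
        } else {
          sum.add(other.vel);
        }
        count++;
      }
    }
    if (count == 0) return sum;
    sum.div(count);
    sum.setMag(maxSpeed);
    PVector force = PVector.sub(sum, vel);
    force.limit(maxForce);
    return force;
  }

  PVector cohere(ArrayList<Boid> all, float radius) {
    PVector center = new PVector();
    int count = 0;
    for (Boid other : all) {
      if (other != this && PVector.dist(pos, other.pos) < radius) {
        center.add(other.pos);
        count++;
      }
    }
    if (count == 0) return center;
    center.div(count);
    PVector desired = PVector.sub(center, pos);
    desired.setMag(maxSpeed);
    PVector force = PVector.sub(desired, vel);
    force.limit(maxForce);
    return force;
  }

  void update() {
    vel.add(acc);
    vel.limit(maxSpeed);
    pos.add(vel);
    acc.mult(0);
    pos.x = (pos.x + width) % width;    // wrap around edges
    pos.y = (pos.y + height) % height;
  }

  void show() {
    pushMatrix();
    translate(pos.x, pos.y);
    rotate(vel.heading());
    triangle(8, 0, -6, 4, -6, -4);
    popMatrix();
  }
}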

 

Lab 11 Drawing machine – Kathy Wang

Interaction lab – Lab 11

Instructor: Moon

Partners: Amy Mao, Andrew Huang, Jacob Park

Assignment:

We are supposed to create a drawing machine, using an H-bridge to control stepper motors attached to mechanical arms. Form groups of two and assemble the circuit using the SN754410NE IC and the pre-installed Arduino Stepper library to control one stepper motor.

Work process:

We built the mechanical arm by following the circuit diagram. At first it did not work, and we found out that the power was not well connected. The arm also moved too quickly, so we adjusted the speed value in the Arduino sketch. Here is the circuit we built; it looks a little bit messy though, haha.

IMG_0595

Another reason it moved so fast was that we had not added a potentiometer, which is actually quite dangerous, so we stopped the motor and wired one in. A potentiometer has two sides: the side with a single leg (the wiper) connects to the analog pin, while on the other side with two legs, one connects to ground and the other to power. It worked when we rotated the small knob on top of the potentiometer.

IMG_0599

After successfully finishing the second part, it was time to put the two mechanical arms together! The instructions were quite simple: we just needed to attach both arms to a laser-cut board and connect them with a rivet.

Here is the final result! We are super excited to get our first drawing machine done.

 

 

Interaction Lab- reflection essay- Kathy

Kathy Wang

Interaction Lab (Moon)

Reflection Essay

April 26, 2017

My original idea was to create an environment-detecting robot that addresses energy waste around campus and calls for environmental awareness. People tend to ignore wasted energy because it is invisible, so our project aimed to make it visible by visualizing real-time data. However, this was beyond the class scope and unrealistic to achieve in the limited time.

Here is a modified version of our project goal: we want to create a box-like robot/laptop companion that visualizes the surrounding environment of your laptop and interacts with the user based on how long they have spent staring at the screen. As electronic devices of all kinds become popularized and an indispensable part of daily life, more and more people suffer from internet addiction, along with symptoms such as sedentariness. People tend to lose track of time when they become too focused on playing online games, surfing the internet, or coding. We want people to become aware of the environment they are in, in an interactive and artistic way.

The project consists of two parts: data visualization and time interaction. When the user opens the laptop and turns on the front-facing camera, a timer in Processing starts, together with art patterns generated from the temperature and sound data collected by Arduino and the visual data from the camera. If the user sits in front of the screen for more than an hour, the box sends a voice message asking them to leave the table for five minutes to rest or exercise. The voice message repeats if the user does not leave. If they come back after five minutes as intended, the timer resets to zero and starts over.
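A minimal sketch of the timer logic in Processing (simplified: the face check is stubbed out as a boolean, and here the timer restarts whenever the user steps away rather than only after a full five-minute break):

// millis()-based screen-time timer. "userPresent" is a stand-in
// for what the camera's facial tracking would report.
int sessionStart;               // when the user was first seen
int ONE_HOUR = 60 * 60 * 1000;  // in milliseconds
boolean userPresent = true;     // would come from facial tracking

void setup() {
  size(400, 200);
  sessionStart = millis();
}

void draw() {
  background(0);
  if (userPresent) {
    int elapsed = millis() - sessionStart;
    text("screen time: " + elapsed / 1000 + "s", 20, 40);
    if (elapsed > ONE_HOUR) {
      text("Please leave the table for 5 minutes!", 20, 80);
      // here the box would play its voice message
    }
  } else {
    sessionStart = millis();    // user left: reset and start over
  }
}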

This final project incorporates my original ideation and extends my midterm project with Esther. We will need: 1) an 8cm × 6cm × 7cm laser-cut box; 2) an Arduino, a noise sensor, a temperature sensor, and the laptop's facial and sound recognition. Knowledge of computer vision, serial communication, and generative art will be applied.

I have found many sound visualization and facial tracking samples online (attached below); they are great sources to learn from. Ideally I want to combine the two using the same "particle" class, so the Processing visuals respond to both the facial recognition and the environment conditions, for example: temperature → background color; sound → amplitude/frequency of particle movement; brightness → particle color. This is a challenging task, but I will try my best!

Reference:

(Reference screenshots: a music visualizer and two sound visualization examples.)

NOC-Week10-Flowfield-Kathy

After recitation, I experimented with different parameters (amp, freq, maximum, frameCount, etc.) to create digital fabrics with different textures. It was amazing to see all the delicate threads accumulate in a self-directing manner.

Fabric

(Figure 1 was created with random width and height, moderate frequency, and a low alpha value. It takes some time to get this effect.)

Screen Shot 2017-04-25 at 10.52.34 AM Screen Shot 2017-04-25 at 10.47.15 AM

(Figures 2 & 3 add the mouseX value to the amplitude; move the mouse steadily between the left and right sides.)

Screen Shot 2017-04-25 at 12.15.44 PM Screen Shot 2017-04-25 at 10.43.18 AM

(Figures 4 & 5 increase maxDesiredVelocity and the steering force.)

Screen Shot 2017-04-25 at 10.39.33 AM Screen Shot 2017-04-25 at 10.37.25 AM

(Figures 6 & 7 change the starting position and frequency level.)
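For reference, here is a minimal version of the fabric technique in Processing; amp and freq mirror the parameters I experimented with, though the values here are illustrative:

// The background is never cleared, so the semi-transparent threads
// accumulate into a fabric-like texture over time.
float amp = 2.0;     // field strength
float freq = 0.01;   // noise frequency
ArrayList<PVector> agents = new ArrayList<PVector>();

void setup() {
  size(600, 600);
  background(255);
  for (int i = 0; i < 200; i++) {
    agents.add(new PVector(random(width), random(height)));
  }
  stroke(0, 10);   // low alpha: each pass adds a faint thread
}

void draw() {
  for (PVector a : agents) {
    // the noise field steers each agent
    float angle = noise(a.x * freq, a.y * freq) * TWO_PI * amp;
    float px = a.x, py = a.y;
    a.x += cos(angle);
    a.y += sin(angle);
    line(px, py, a.x, a.y);
    // restart the thread when it leaves the canvas
    if (a.x < 0 || a.x > width || a.y < 0 || a.y > height) {
      a.set(random(width), random(height));
    }
  }
}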

Source code: Link

 

I am still working on the source code to create something new.

 

Inspiring video by Casey Reas: https://vimeo.com/22955812

Lab 10 – 3D Modeling

Date: April 21, 2017

Course: Interaction Lab

Instructor: Moon

Teammate: Esther


Assignment:

Using Tinkercad, design a 3D model of a wearable device, a game controller, or a security camera that utilizes one of the following components or equipment: accelerometer / 4-digit display / thumb joystick / stepper motor / Logitech webcam. Your model has to be 3D printable and must be able to hold or support one of the pieces listed above. In order to create a precise mount or connector, you must check the component's datasheet to find the correct dimensions. Students can also take measurements using vernier calipers if a datasheet with part dimensions is unavailable.

Exercise:

Considering what we are going to do for the final project, we chose the Logitech webcam as the main piece of equipment. We want to create a simple yet good-looking cubic robot model that holds the webcam so that it can observe its environment.

My first step was to check out the webcam: http://ima.nyu.sh/equip/equip-items/logitech-hd-pro-webcam-02/ After reviewing the equipment documentation blog, I did not find any size information, so I used vernier calipers to take the measurements. The camera is roughly 50mm long, 28mm high, and 35mm wide. We decided to make the box a 90 × 90 × 90mm cube.

I drew one big solid cube on screen first and then added a smaller cuboid to represent the position of the webcam. I adjusted the corner radius to make it look better. The grouping process can be quite tricky: from one angle the two items look perfectly aligned, but from another they are actually staggered, so it is important to check from different perspectives.

As I got a better sense of Tinkercad's design interface, I began to think about how to hollow out the cube so that it functions as a real container. To make the inside of the box visible, I set the biggest cube to "hollow" so we could see its internal structure better. I duplicated the cube and scaled it down to a smaller one with the same center point, as in the figures below. I also added another hole on the back of the box for the USB cable.

Screen Shot 2017-04-21 at 12.29.29 PM

Screen Shot 2017-04-21 at 12.29.36 PM

Screen Shot 2017-04-21 at 12.29.42 PM

I felt it was quite hard to get accurate 3D models through Tinkercad, and wondered how people design complicated models, like the cartoon characters I saw last week during the Autodesk field trip. So I searched the Tinkercad blog for a 3D model of Totoro, one of my favorite characters from Hayao Miyazaki's animated films.

Screen Shot 2017-04-21 at 12.29.55 PM Screen Shot 2017-04-21 at 12.30.05 PM

It is indeed super… super… complicated… The use of hollows in 3D modeling design is really amazing!

Equipment:

Screen Shot 2017-04-21 at 2.32.30 PM

 

 

NOC-week9-particle system + springs-Kathy

Code: link

I tried combining a particle system with inheritance to create the effect shown above. I ran into a few bugs while reorganizing my code, but fixing them actually gave me a better understanding of the code structure. I definitely want to apply both to my final project.
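A minimal sketch of the inheritance structure (the class names are illustrative, not my actual ones): subclasses override show() while the base class keeps the shared physics:

// Both types live in one ArrayList<Particle>; polymorphism picks
// the right show() for each.
ArrayList<Particle> particles = new ArrayList<Particle>();

void setup() {
  size(400, 400);
  for (int i = 0; i < 50; i++) {
    particles.add(random(1) < 0.5 ? new Particle(200, 200) : new Confetti(200, 200));
  }
}

void draw() {
  background(255);
  for (Particle p : particles) {
    p.update();
    p.show();
  }
}

class Particle {
  PVector pos, vel;
  Particle(float x, float y) {
    pos = new PVector(x, y);
    vel = PVector.random2D();
  }
  void update() { pos.add(vel); }
  void show() { ellipse(pos.x, pos.y, 8, 8); }
}

class Confetti extends Particle {
  Confetti(float x, float y) { super(x, y); }
  void show() {
    rect(pos.x, pos.y, 8, 8);   // same physics, different look
  }
}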

I also practiced springs over the weekend. The live coding Moon did during recitation reminded me of constellations in the universe; it would be a really cool idea to use the spring method to create a constellation map like the one below:

260f02f72927d115613cb57a2f441b97
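As a building block for that idea, here is a minimal Hooke's-law spring between two nodes in Processing (the constants are illustrative); a constellation map would connect many such nodes:

// One spring: F = -k * stretch, along the spring's direction.
PVector anchor, pos, vel;
float restLen = 100, k = 0.05;

void setup() {
  size(400, 400);
  anchor = new PVector(width/2, 100);
  pos = new PVector(width/2 + 80, 250);
  vel = new PVector();
}

void draw() {
  background(0);
  PVector force = PVector.sub(pos, anchor);
  float stretch = force.mag() - restLen;
  force.setMag(-k * stretch);   // pull back toward rest length
  vel.add(force);
  vel.mult(0.98);               // damping so it settles
  pos.add(vel);

  stroke(255);
  line(anchor.x, anchor.y, pos.x, pos.y);
  fill(255);
  ellipse(anchor.x, anchor.y, 8, 8);
  ellipse(pos.x, pos.y, 16, 16);
}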

I also found a very interesting planarity puzzle (https://www.jasondavies.com/planarity/) made by Jason Davies. He did it through D3, I tried to replicate it in p5.js but failed…. would be awesome if anyone knows how to construct it~ I am still working on the spring part.