Interaction Lab – Reflection Essay – Kathy

Kathy Wang

Interaction Lab (Moon)

Reflection Essay

April 26, 2017

My original idea was to create an environment-sensing robot that addresses energy waste around campus and raises environmental awareness. People tend to ignore wasted energy because it is invisible, so our project aims to make it visible by visualizing real-time data. However, this idea goes beyond the scope of the class and is unrealistic to achieve in the limited time.

Here is the modified version of our project goal: we want to create a box-like robot/laptop companion that visualizes the surrounding environment of your laptop and interacts with the user according to how long he/she has spent staring at the screen. As electronic devices of all kinds become popularized and an indispensable part of our daily lives, more and more people suffer from internet addiction, along with symptoms such as sedentariness. People tend to lose track of time when they get too focused on playing online games, surfing the internet or coding. We want people to become aware of the environment they are in, in an interactive and artistic way.

The project consists of two parts: data visualization and time interaction. When the user opens the laptop and turns on the front-facing camera, a timer in Processing starts, together with art patterns generated from the temperature and sound data collected by the Arduino and the visual data from the camera. If the user sits in front of the screen for more than one hour, the box will play a voice message asking him/her to leave the table for five minutes to get some rest or exercise. The voice message repeats if the user does not leave. If he/she comes back after five minutes as intended, the timer resets to zero and starts over.
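A minimal sketch of the timer logic, written in p5.js-style JavaScript for illustration (the project itself will run in Processing, where millis() behaves the same way). The variable names, thresholds and the key-press stand-in for face detection are my own assumptions:

const WORK_LIMIT = 60 * 60 * 1000; // one hour of screen time, in ms
const REST_TIME = 5 * 60 * 1000;   // the required five-minute break, in ms

let sessionStart = 0; // millis() value when the current session began
let awaySince = -1;   // millis() value when the user left; -1 = still here

function setup() {
  createCanvas(400, 100);
}

function draw() {
  background(220);
  // userPresent would come from the face-tracking code; a key press
  // stands in for "user has left" so the sketch runs on its own
  const userPresent = !keyIsPressed;

  if (userPresent) {
    awaySince = -1;
    if (millis() - sessionStart > WORK_LIMIT) {
      // here the box would play (and keep repeating) the voice message
      text("Please leave the table for 5 minutes!", 10, 50);
    }
  } else {
    if (awaySince === -1) awaySince = millis();
    if (millis() - awaySince > REST_TIME) {
      sessionStart = millis(); // the user rested long enough: start over
      awaySince = -1;
    }
  }
}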

This final project incorporates my original ideation and extends my midterm project with Esther. We will need: 1) an 8cm × 6cm × 7cm laser-cut box; 2) an Arduino with a noise sensor and a temperature sensor; 3) the laptop's facial and sound recognition. Knowledge of computer vision, serial communication and computer-generated art will be applied.

I have found many sound visualization and facial tracking samples online (attached below); they are great sources to learn from. Ideally, I want to combine the two using the same "particle" class, so that the visuals in Processing respond to both the facial recognition and the environmental conditions, for example: temperature – background color; sound – amplitude/frequency of particle movement; brightness – particle color. This is a challenging task, but I will try my best!
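A sketch of how those three mappings could be wired up, assuming placeholder value ranges (the real ranges depend on the sensors):

let temperature = 25; // °C, would arrive from the Arduino over serial
let soundLevel = 0.3; // 0..1, from the noise sensor
let brightness = 0.8; // 0..1, from the camera image

function setup() {
  createCanvas(600, 400);
  colorMode(HSB, 360, 100, 100);
}

function draw() {
  // temperature -> background color (cold = blue, hot = red)
  const hue = map(temperature, 10, 35, 240, 0, true);
  background(hue, 60, 30);

  // sound -> amplitude/frequency of the particle movement
  const amp = map(soundLevel, 0, 1, 5, 50);
  const freq = map(soundLevel, 0, 1, 0.01, 0.1);

  // brightness -> particle color
  const particleBrightness = map(brightness, 0, 1, 30, 100);
  fill(hue, 40, particleBrightness);
  // each particle would be drawn with amp, freq and this fill, e.g.:
  circle(width / 2 + amp * sin(frameCount * freq * TWO_PI), height / 2, 20);
}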

Reference:

(Reference images: music visualizer and sound visualization examples; see link)

NOC-Week10-Flowfield-Kathy

After recitation, I experimented with different parameters (amp, freq, maximum, frameCount, etc.) to create digital fabrics of different textures. It was amazing to watch all the delicate threads accumulate in a self-directing manner.

Fabric

(Figure 1 is created with random width and height, moderate frequency and a low alpha value. It takes some time to achieve this effect.)

Screen Shot 2017-04-25 at 10.52.34 AM Screen Shot 2017-04-25 at 10.47.15 AM

(Figures 2 & 3 add the mouseX value to the amplitude, while moving the mouse steadily between the left and right sides.)

Screen Shot 2017-04-25 at 12.15.44 PM Screen Shot 2017-04-25 at 10.43.18 AM

(Figures 4 & 5 increase maxDesiredVelocity and the steering force.)

Screen Shot 2017-04-25 at 10.39.33 AM Screen Shot 2017-04-25 at 10.37.25 AM

(Figures 6 & 7 change the starting position and the frequency level.)

Source code: Link

 

I am still working on the source code to create something new.
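The basic mechanism behind these fabrics, reduced to a minimal sketch (the parameter values here are stand-ins for the ones I tuned, not the originals): particles step through a Perlin-noise field, and because the background is never cleared, their low-alpha trails accumulate into the woven texture.

let particles = [];
const freq = 0.005; // noise frequency: lower values give smoother threads
const amp = 2;      // step size, i.e. the amplitude of each move

function setup() {
  createCanvas(600, 400);
  background(255);
  stroke(0, 10); // low alpha so the texture builds up over many frames
  for (let i = 0; i < 200; i++) {
    particles.push(createVector(random(width), random(height)));
  }
}

function draw() {
  for (const p of particles) {
    // the noise field assigns an angle to every position on the canvas
    const angle = noise(p.x * freq, p.y * freq) * TWO_PI * 2;
    const next = p5.Vector.add(p, p5.Vector.fromAngle(angle).mult(amp));
    line(p.x, p.y, next.x, next.y);
    p.set(next);
  }
}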

 

Inspiring video by Casey Reas: https://vimeo.com/22955812

Lab 10 – 3D Modeling

Date: April 21, 2017

Course: Interaction Lab

Instructor: Moon

Teammate: Esther


Assignment:

Using Tinkercad, design a 3D model of a wearable device, a game controller or a security camera that utilizes one of the following components or pieces of equipment: Accelerometer / 4-Digit Display / Thumb Joystick / Stepper Motor / Logitech Webcam. Your model has to be 3D-printable and must be able to hold or support one of the pieces listed above. In order to create a precise mount or connector, you must check the datasheet of the component or equipment to find the correct dimensions. Students can also take measurements using vernier calipers if a datasheet with part dimensions is unavailable.

Exercise:

Considering what we are going to do for the final project, we chose the Logitech Webcam as the main piece of equipment. We want to create a simple yet good-looking cubic robot model that holds the webcam so that it can observe its environment.

My first step was to check out the webcam: http://ima.nyu.sh/equip/equip-items/logitech-hd-pro-webcam-02/. After reviewing the equipment documentation blog, I did not find any size information, so I used vernier calipers to take the measurements. The webcam is roughly 50mm long, 28mm high and 35mm wide. We decided to make the box a 90 × 90 × 90mm cube.

I drew one big solid cube on screen first and then added a smaller cuboid to represent the position of the webcam. I adjusted the corner radius to make it look better. Sometimes the grouping process can be quite tricky: from one angle the two items look perfectly aligned, but from another they are actually staggered, so it is important to check from different perspectives.

As I got a better feel for Tinkercad's design interface, I began to think about how to hollow out the cube so that it functions as a real box for containing things. To make the inside of the box visible, I set the biggest cube to "hollow" so we can better see its internal structure. I duplicated the cube and scaled it into a smaller one with the same center point, as in the figures below. I also added another hollow at the back of the box as a USB outlet.

Screen Shot 2017-04-21 at 12.29.29 PM

Screen Shot 2017-04-21 at 12.29.36 PM

Screen Shot 2017-04-21 at 12.29.42 PM

I found it quite hard to do accurate 3D modeling in Tinkercad, and wondered how people design complicated models, like the cartoon characters I saw last week during the Autodesk field trip. So I went online and searched the Tinkercad blog for a 3D model of Totoro, one of my favorite characters from Hayao Miyazaki's animated films.

Screen Shot 2017-04-21 at 12.29.55 PM Screen Shot 2017-04-21 at 12.30.05 PM

It is indeed super... super... complicated! The use of hollows in 3D modeling is really amazing!

Equipment:

Screen Shot 2017-04-21 at 2.32.30 PM

 

 

NOC-week8-particle system + springs-Kathy

Code: link

I tried to combine a particle system with inheritance to create the effect shown above. I ran into a few bugs while reorganizing my code, but fixing them gave me a better understanding of the code structure. I definitely want to apply both techniques to my final project.
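For reference, a minimal particle-system-plus-inheritance skeleton in p5.js (the class names are generic, not the ones from my sketch): the subclass inherits the physics from Particle and only overrides how it is drawn.

class Particle {
  constructor(x, y) {
    this.pos = createVector(x, y);
    this.vel = createVector(random(-1, 1), random(-2, 0));
    this.acc = createVector(0, 0.05); // gravity
    this.lifespan = 255;
  }
  update() {
    this.vel.add(this.acc);
    this.pos.add(this.vel);
    this.lifespan -= 2;
  }
  isDead() {
    return this.lifespan < 0;
  }
  display() {
    fill(255, this.lifespan);
    circle(this.pos.x, this.pos.y, 8);
  }
}

// Confetti inherits update()/isDead() and changes only its look
class Confetti extends Particle {
  display() {
    fill(255, 0, 200, this.lifespan);
    square(this.pos.x - 4, this.pos.y - 4, 8);
  }
}

let particles = [];

function setup() {
  createCanvas(400, 300);
  noStroke();
}

function draw() {
  background(0);
  particles.push(random() < 0.5 ? new Particle(width / 2, 50)
                                : new Confetti(width / 2, 50));
  for (let i = particles.length - 1; i >= 0; i--) {
    particles[i].update();
    particles[i].display();
    if (particles[i].isDead()) particles.splice(i, 1);
  }
}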

I also practiced springs over the weekend. The live coding Moon did during the recitation reminded me of constellations in the universe; it would be a really cool idea to use the spring method to create a constellation map like the one below:

(constellation map image)

I also found a very interesting planarity puzzle (https://www.jasondavies.com/planarity/) made by Jason Davies. He built it with D3; I tried to replicate it in p5.js but failed... it would be awesome if anyone knows how to construct it~ I am still working on the spring part.
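Meanwhile, here is the core of the spring method in a minimal Hooke's-law sketch (the constants are arbitrary choices of mine): the force is proportional to how far the spring is stretched past its rest length.

let anchor, pos, vel;
const k = 0.1;          // spring stiffness
const restLength = 120; // natural length of the spring
const damping = 0.98;   // keeps the bob from oscillating forever

function setup() {
  createCanvas(400, 400);
  anchor = createVector(width / 2, 50);
  pos = createVector(width / 2 + 80, 250);
  vel = createVector(0, 0);
}

function draw() {
  background(255);

  // Hooke's law: F = -k * stretch, along the spring's direction
  const dir = p5.Vector.sub(pos, anchor);
  const stretch = dir.mag() - restLength;
  const force = dir.normalize().mult(-k * stretch);

  vel.add(force);
  vel.mult(damping);
  pos.add(vel);

  line(anchor.x, anchor.y, pos.x, pos.y);
  circle(anchor.x, anchor.y, 8);
  circle(pos.x, pos.y, 20);
}

A constellation map would be the same idea with many bobs and one spring for each edge between stars.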

 

 

 

 

Interaction Lab: Stamp + Autodesk Field Trip

Date: April 14, 2017

Instructor: Moon

Stamp Tutorial: https://vimeo.com/212333844

I hadn't used AI (Adobe Illustrator) before, so it was exciting to try making my first pattern/stamp on my own. The tutorial video is really helpful in introducing some of the most efficient and common design tips. I learned a lot about creating layers, transforming shapes, aligning text with a path, and using guide lines. In particular, the part about converting text into outlines is quite tricky. Here is a demonstration of how I created the stamp and what I finally made:

Screen Shot 2017-04-16 at 12.33.55 PM Screen Shot 2017-04-16 at 12.29.31 PM

Last Friday, we also took a great field trip to Autodesk, a multinational software company that specializes in 3D design, engineering and entertainment software. The demos from the company were impressive and informative, showing the enormous power of human creativity. As a business and finance major, most of the classes I have taken for my major emphasize existing theories rather than creativity, so it felt great to see a completely new world where the wildest human imagination can be realized through 3D printing and digital prototyping. It was also interesting to see the company's business model shifting from licenses to subscriptions.

NOC-week7-waves-Kathy

This week, we developed a more comprehensive understanding of oscillations. I experimented with the examples the professor demonstrated in class, changing several parameters and constructing more waves with the mouse position. My midterm inspiration also comes partly from the beauty of sin() and cos(). Attached are the code and demo.

file: download
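A small sketch in the same spirit (the parameter values are my own, not the class example's): two fixed sine waves are summed with a third wave shaped by the mouse, mouseX setting its frequency and mouseY its amplitude.

function setup() {
  createCanvas(600, 300);
  noFill();
}

function draw() {
  background(255);
  const f = map(mouseX, 0, width, 0.005, 0.1); // mouse-driven frequency
  const a = map(mouseY, 0, height, 0, 60);     // mouse-driven amplitude
  beginShape();
  for (let x = 0; x < width; x++) {
    const y = 30 * sin(x * 0.02 + frameCount * 0.05)  // base wave
            + 15 * cos(x * 0.05 - frameCount * 0.03)  // second wave
            + a * sin(x * f);                         // mouse wave
    vertex(x, height / 2 + y);
  }
  endShape();
}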

 

NOC Midterm Project – Kathy

Project: Patterns of Nature in p5.js

Date: Mar 31, 2017

Instructor: Moon


I have always been inspired by all kinds of geometrical patterns existing in nature. For the Introduction to Studio Art course I am taking this semester, I actually used the traditional Chinese Gongbi painting technique to create an artwork, which is about celebrating the simplicity and complexity of patterns in nature.

Screen Shot 2017-03-28 at 10.20.50 PM Screen Shot 2017-03-28 at 10.20.12 PM

However, after learning about oscillation in this class, I started thinking: why not do a project with a similar concept in p5.js, one that allows people to freely explore geometry in just a few mouse clicks?

I did some research online and found it entirely feasible to create rhodonea curves in a two-dimensional coordinate system. I made some adjustments to the math formula, and here is how I construct it:

k = n / d

r = s * cos(k * theta) + c

x = r * cos(theta) + (r / p) * cos(angle)

y = r * sin(theta) + (r / p) * sin(angle)
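Plotted directly, the formula looks like this in p5.js (n, d, s, c, p and angle are placeholder values here; in the project they are adjustable):

const n = 5, d = 4; // k = n/d controls the number of petals
const s = 150;      // overall scale
const c = 20;       // radial offset
const p = 8;        // strength of the secondary offset
const angle = 0;    // secondary phase, fixed here

function setup() {
  createCanvas(500, 500);
  background(0);
  stroke(255);
  translate(width / 2, height / 2);
  // sweeping theta over d full turns guarantees the curve closes
  for (let theta = 0; theta < d * TWO_PI; theta += 0.005) {
    const r = s * cos((n / d) * theta) + c;
    const x = r * cos(theta) + (r / p) * cos(angle);
    const y = r * sin(theta) + (r / p) * sin(angle);
    point(x, y);
  }
}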

Imagine hundreds of particles floating weightlessly in a dark, empty space. At a certain moment, they are attracted by some force and become part of a pattern. Play with different parameters to change the shape and influence the particle movement. Press the space key to preview the complete pattern. If you want to observe the frameRate and the number of particles in the space, and to understand more about the curve formula, simply click debugMode in the top right corner.

Screen Shot 2017-03-28 at 10.21.07 PM Screen Shot 2017-03-28 at 10.20.59 PM


I enjoyed doing this project so much, and it was exciting to see bugs fixed and features improved step by step, thanks to the support from Moon and Aven :) Below I document a few challenges I encountered while doing the project. Debugging is such a wonderful learning process.

  • Create gravity-free condition

My first step was to create a gravity-free physical environment for all the particles. I applied the vector-and-force structure we learned in class, but it is quite tricky to assign proper values to the wall and air-resistance coefficients. What I did was simply test the coefficients many times to find the best combination.

  • Direct particles to the right position

I created the pattern display and the floating particles separately, but had no clue how to put them together and, more critically, how to match each individual particle's current position with its target position in the pattern. The "count" and lerp() methods Moon taught me are very useful (see the sketch below). I let the program count through each draw loop, which records the number of target positions. Since the count value equals the particle's index, we can tell each real particle in the array where its target position is. The lerp() function is also far more convenient than forces for directing particle movement.
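In miniature, the idea looks like this (the target here is a plain circle standing in for the rose curve, and the names are mine, not the project's):

let particles = [];
const step = 0.05; // pattern density: one target per step of theta

function setup() {
  createCanvas(500, 500);
  fill(255);
  noStroke();
  for (let i = 0; i < TWO_PI / step; i++) {
    particles.push(createVector(random(width), random(height)));
  }
}

function draw() {
  background(0);
  let count = 0;
  for (let theta = 0; theta < TWO_PI; theta += step) {
    // one target position on the curve per loop iteration
    const target = createVector(width / 2 + 150 * cos(theta),
                                height / 2 + 150 * sin(theta));
    if (count < particles.length) {
      const particle = particles[count]; // count doubles as the index
      particle.lerp(target, 0.05); // move 5% of the remaining distance
      circle(particle.x, particle.y, 4);
    }
    count++;
  }
}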

  • Add and remove particles

When I adjust the "t" and "step" parameters, i.e., how many circumferences I sweep when drawing the pattern and the density of the pattern, the count value inevitably changes, which means I need to adjust the particle array length so that no extra particle stays on screen and every target position has a matching particle. Hence, I use if() statements to either push or pop a particle when needed (schematically, as below), and it works smoothly.
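Schematically, the adjustment runs once per frame after the targets have been counted (reusing particles and count from the sketch above):

if (particles.length < count) {
  particles.push(createVector(0, 0)); // a target lacks a particle: add one
} else if (particles.length > count) {
  particles.pop(); // an extra particle has no target: drop the last one
}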

  • Translate() function

I messed up the translate() function twice... and it generated so much confusion that I spent a lot of time finding the logical flaw. The problem: because I want new particles to be pushed from the center of the screen, I added translate(width/2, height/2) to the draw loop, but I had already used another translate(width/2, height/2) when switching from floating mode to display mode, so there was an overlapping section where the translation was applied twice. This caused particles to stick to the top left corner when shifting from display to floating mode, and to the bottom right corner when shifting from floating to display mode... I finally solved the problem by adjusting the location of the translate() call.
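In my sketch the fix was moving the translate() call; an equivalent way to rule this bug out entirely is to isolate each mode's transform with push()/pop(), e.g.:

function draw() {
  background(0);

  push();
  translate(width / 2, height / 2); // floating mode: spawn from center
  // ... draw the floating particles ...
  pop(); // the translation cannot leak into the next section

  push();
  translate(width / 2, height / 2); // display mode: center the pattern
  // ... draw the pattern ...
  pop();
}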

  • FrameRate issue

Due to the large number of particles I generate, the frameRate stays around 20-35 when drawing sophisticated patterns. As mentioned before, I used pop() to delete one extra particle per frame instead of a for() loop plus splice(), because the former is more intuitive and neater. The side effect is that pushing and popping a single particle per frame takes time, so I have to wait a while for the pattern to complete. It is not a big issue, but I would like to know if there are ways to improve it.


Things to improve: 3D if possible? Fancier colour effects? frameRate.

Reference:

  1. zipfile: download
  2. concept presentation: link
  3. demo: stage

Interaction Lab Midterm Project – Kathy

Interaction Lab Midterm Project Documentation

Date: Mar 30, 2017

Team: Kathy Wang, Esther Liu

Instructor: Moon

Keep Your Phone Away!

DESCRIPTION

Esther and I developed a phone locker that helps people curb smartphone addiction and stay focused while studying. As smartphones become popularized and an indispensable part of our lives, it is very tempting to check our phones frequently when we are supposed to be concentrating on study and work. This phenomenon raised our concern, so we were thinking... what about creating some kind of installation to keep our phones locked away?

Screen Shot 2017-03-30 at 2.17.18 PM Screen Shot 2017-03-30 at 2.17.23 PM

We did some research online and found that a variety of phone applications have already been developed for similar purposes, but not many are user-friendly. In addition, making a phone app would be quite challenging given our current knowledge, so we decided to make a physical device with a timer on the computer screen.

After several discussions, we decided we wanted the following features:

  • Put your smartphone in the box, set up a time, close the box;
  • If you open the box before the appointed time, you will get pop-up surprises and the timer will be reset to zero;
  • If you keep the box closed until the time is up, you will receive congratulatory signals;
  • The heart bar at the bottom of the screen gets filled as time goes by, so as to encourage you to stay away from the box when the timer is on.

DEMO OF THE PROJECT

You can add plastic foam to the box before you set the time. If you do not obey the rule, the fan will start working and blow the foam everywhere, so you definitely do not want to break the rule, cuz it can end up being quite destructive... in a good way, I guess! We actually tried it, and it worked perfectly.

TECHNICAL DEVELOPMENT

We encountered many obstacles in the process. The interface looks simple, but we did not realize that even a small on-screen timer needs quite sophisticated statements until we actually set about building it. We want the timer to be controlled by the light sensor in the box, so that when the box is opened the timer resets to zero; otherwise it counts until it reaches the target time. At first I tried the frameRate() function, but it did not work, so I did some self-study and used millis() instead. Then I added a boolean value and if() statements to control the timer. For detailed reference, please see the code attached below.
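The core of that logic, reduced to a sketch (written in p5.js-style JavaScript for readability; the real version is in Processing, and the threshold and target numbers here are made up):

let startTime = 0;     // millis() value when the countdown (re)started
const threshold = 300; // light reading above which the box counts as open
const targetTime = 25 * 60 * 1000; // e.g. a 25-minute session

// called every frame with the light-sensor reading from the Arduino;
// returns true once the full session has passed with the box closed
function updateTimer(lightValue) {
  const boxOpen = lightValue > threshold;
  if (boxOpen) {
    startTime = millis(); // opening the box resets the countdown
  }
  return millis() - startTime >= targetTime;
}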

Serial communication was another big challenge. We do not simply want the Arduino to send values to Processing, but also the other way around, i.e., to use Processing to manage the fan and the servo on the Arduino according to the different stages of timing:

  • box open
    • time has not reached the target yet: fan on, servo moves, timer reset to zero, heart bar reset
    • time is up: buzzer activated, fan off, servo moves, image pops up
    • after time reaches the target: fan off, servo off, image stays
  • box closed
    • time has not reached the target yet: fan off, servo off, timer continues, heart bar fills up
    • time is up: buzzer activated, fan off, servo off, image shown, timer continues
    • after time reaches the target: buzzer stops, fan off, servo off, image stays, timer continues

The instructions get really complex, so we ended up organizing them as above. For each condition, we send one value from Processing to the Arduino (sketched below). After a few tests, it finally worked!
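One way to collapse the six conditions into a single value (the encoding is my own; in the real sketch the number is sent with Processing's Serial.write(), and the Arduino switches on it to drive the fan, servo and buzzer):

// justFinished is true only on the frame when the timer first hits the target
function stateCode(boxOpen, elapsed, targetTime, justFinished) {
  let phase;
  if (elapsed < targetTime) phase = 0; // still counting
  else if (justFinished)    phase = 1; // time is up right now: buzzer
  else                      phase = 2; // after the target: image stays
  return (boxOpen ? 0 : 3) + phase;    // 0-2 = box open, 3-5 = box closed
}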

FullSizeRender 2

FullSizeRender 3

FullSizeRender 4

FullSizeRender

 

 

LESSONS LEARNED

The learning process was rewarding. We experienced many failures, which also furthered our knowledge. A few takeaways I would like to share after the presentation:

  1. Concept development is crucial. I tend to worry a lot about programming/technical issues while somewhat putting the conceptual part aside. We only began to think about how to put the individual parts together after making the technology work, which turned out not to be very effective. It is always good to have a clear picture of the installation in advance, so we can save time on readjusting the position of the sensor, the direction of the servo, etc.
  2. I got much more familiar with boolean variables, if() statements, millis(), frameRate issues, code structure, serial communication, buzzer pitch control, etc., and in particular with how to debug step by step on my own. I once messed up my Arduino code and did not know what to do, because everything seemed to be connected to everything else... Later I learned to use the Arduino serial monitor for debugging, testing each individual value that Processing sends to the Arduino board. Finally I was able to find where the problem was and fix it.
  3. The light sensor is very sensitive to its surroundings. When we did the live demo in class, the sensor got somewhat out of control because the classroom was much darker than where we had tested the device, so we had to use a flashlight. It is important to take such things into consideration when preparing the presentation.
  4. For further development, we would like to make the interface prettier and maybe think of a more interactive way of handling the serial communication.

CODE

NOC-week6-Kathy

There are many ways to construct an oscillation. After the recitation, I read through the readings and tried a different approach. It is very interesting to observe how the wave's movement changes as its amplitude and frequency are adjusted. While increasing the amplitude does not necessarily make a big difference, once the frequency rises past a certain critical point it no longer feels like a natural wave.

Move your mouse around to see what will happen.
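A minimal version of the idea (the parameter ranges are my own): mouseY sets the amplitude and mouseX the frequency. Once the frequency is high enough that the wave completes a cycle within just a few pixels, the curve stops reading as a natural wave.

function setup() {
  createCanvas(600, 300);
  noFill();
}

function draw() {
  background(255);
  const amplitude = map(mouseY, 0, height, 10, 120);
  const frequency = map(mouseX, 0, width, 0.005, 0.5);
  beginShape();
  for (let x = 0; x < width; x++) {
    vertex(x, height / 2 + amplitude * sin(x * frequency + frameCount * 0.05));
  }
  endShape();
}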