Interaction Lab (Moon)
April 26, 2017
My original idea was to create an environment-sensing robot that addresses energy waste around campus and raises environmental awareness. People tend to ignore wasted energy because it is something they cannot see, so our project aimed to make it visible by visualizing real-time data. However, that project is beyond the scope of the class and unrealistic to achieve in the limited time.
Here is a modified version of our project goal: we want to create a box-like robot/laptop companion that visualizes the environment surrounding your laptop and interacts with the user based on how long they have spent staring at the screen. As electronic devices of all kinds become popular and an indispensable part of our daily lives, more and more people suffer from internet addiction, along with symptoms such as sedentariness. People tend to lose track of time when they become too absorbed in playing online games, surfing the internet, or coding. We want people to become aware of the environment they are in through an interactive and artistic medium.
The project consists of two parts: data visualization and time-based interaction. When the user opens the laptop and turns on the front-facing camera, a timer in Processing starts, along with art patterns generated from the temperature and sound data collected by the Arduino and the visual data from the camera. If the user sits in front of the screen for more than one hour, the box will play a voice message asking them to leave the table for five minutes to rest or exercise. The voice message will repeat until the user leaves. If they come back after five minutes as asked, the timer resets to zero and starts over.
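The reminder logic above can be sketched as a small state machine, written here in plain Java for clarity (the actual sketch would live in Processing). The class and constant names are illustrative, not from the project code, and the tick rate of once per second is an assumption:

```java
// Sketch of the break-reminder timer: one hour at the screen triggers a
// voice message; five continuous minutes away resets the count.
public class ScreenTimer {
    static final int LIMIT = 60 * 60; // 1 hour of screen time triggers a reminder
    static final int BREAK = 5 * 60;  // 5 minutes away resets the timer

    int screenSeconds = 0; // continuous seconds with a face detected
    int awaySeconds = 0;   // continuous seconds with no face detected

    // Called once per second with the camera's face-detection result.
    // Returns true whenever the box should play (or repeat) the voice message.
    public boolean tick(boolean faceDetected) {
        if (faceDetected) {
            awaySeconds = 0;
            screenSeconds++;
            return screenSeconds >= LIMIT; // keep repeating until the user leaves
        } else {
            awaySeconds++;
            if (awaySeconds >= BREAK) {
                screenSeconds = 0; // full break taken: start over
            }
            return false;
        }
    }
}
```

Keeping the two counters separate means a brief glance away does not count as a break: only five uninterrupted minutes of absence resets the screen-time clock, matching the behavior described above.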
This final project incorporates my original ideation and extends my midterm project with Esther. We will need: 1) an 8 cm x 6 cm x 7 cm laser-cut box; 2) an Arduino with a noise sensor and a temperature sensor; 3) the laptop's camera and microphone for facial and sound recognition. Knowledge of computer vision, serial communication, and computer-generated art will be applied.
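For the serial-communication part, the Processing side has to parse whatever the Arduino prints over the serial port. A minimal sketch in Java, assuming (this format is my own choice, not specified above) that the Arduino sends one reading per line as `temperature,noise`, e.g. `23.5,412`:

```java
// Parse one serial line from the Arduino, assuming a "temperature,noise"
// CSV format (an assumed convention, not the project's actual protocol).
public class SensorReading {
    public final float temperature; // degrees Celsius (assumed units)
    public final int noise;         // raw 0-1023 analog reading (assumed range)

    public SensorReading(float temperature, int noise) {
        this.temperature = temperature;
        this.noise = noise;
    }

    // Returns null on malformed input so a dropped serial byte
    // does not crash the visualization loop.
    public static SensorReading parse(String line) {
        String[] parts = line.trim().split(",");
        if (parts.length != 2) return null;
        try {
            return new SensorReading(Float.parseFloat(parts[0]),
                                     Integer.parseInt(parts[1]));
        } catch (NumberFormatException e) {
            return null;
        }
    }
}
```

In Processing, the same parsing would typically run inside `serialEvent()` after reading a full line with `bufferUntil('\n')`.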
I have found many sound-visualization and facial-tracking examples online (attached below); they are great resources to learn from. Ideally I want to combine the two using the same "particle" class, so that the visuals in Processing respond to both the facial recognition and the environmental conditions, for example: temperature – background color; sound – amplitude/frequency of particle movement; brightness – particle color. This is a challenging task, but I will try my best!
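The three mappings listed above are all linear remaps of a sensor range onto a visual range, which is exactly what Processing's `map()` function does. Here is a sketch of them in plain Java with an equivalent helper; all of the input/output ranges are illustrative guesses, not measured values:

```java
// Illustrative sensor-to-visual mappings: temperature -> background hue,
// sound -> particle speed, camera brightness -> particle color.
public class VisualMap {
    // Linear remap, equivalent to Processing's map() function
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
    }

    // Room temperature (15-35 C assumed) -> hue from cool blue (240) to hot red (0)
    static float backgroundHue(float tempC) {
        return map(tempC, 15, 35, 240, 0);
    }

    // Raw mic level (0-1023 assumed) -> particle speed in pixels per frame
    static float particleSpeed(int noise) {
        return map(noise, 0, 1023, 0.5f, 8.0f);
    }

    // Camera brightness (0-255) -> particle grayscale value
    static float particleColor(float brightness) {
        return map(brightness, 0, 255, 50, 255);
    }
}
```

Centralizing the mappings in one place like this makes it easy to retune the ranges later once real sensor readings from the room are available.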