KI: Laser Ball: Final Project

Laser Ball was an idea conceived after studying projects with kinetic interfaces. One example is a game created by college students, in which a camera recognizes body motion and lines drawn in chalk and interprets them as barriers. Balls are then projected onto a chalkboard, where they bounce off the human silhouette and the chalk lines drawn by the user. It is a very cool conceptual game, as it makes physical, foreign objects the focal point of interaction. I also drew inspiration from the game “Line Rider”, which is built around the same notion: one player draws lines that act as barriers, and a second player “rides” them to reach the next level.

Another inspiration was a laser project by Graffiti Research Lab. In this colossal art installation, participants use high-powered lasers to draw on a building; a camera recognizes what the user draws, and the graffiti is projected, in real time, onto the building. Together, these projects inspired the idea of Laser Ball.

The initial idea behind Laser Ball is simple. Using a projector, I would project images of balls dropping from the ceiling into a basket designated “Player 1” or “Player 2”. If a ball falls into the basket for Player 1, then Player 1 receives a point; the same is true for Player 2. Using lasers, both players would be able to draw boundaries that are projected in real time and affect how the balls fall. Each boundary causes any falling ball to bounce off of it, which players can manipulate to their advantage: by drawing barriers with the laser, a player might guide falling balls into their own basket.

The idea was simple enough, but difficult to implement. Although my first step was to figure out how to draw barriers, Professor Moon’s advice guided me toward getting the ball class to adhere to physics first. This was a learning process for me, as I had only worked with Processing’s PVector once before. Fortunately, there are many resources online that helped me get the basics of bouncing balls that react to one another. Once the balls bounced (against the sides and each other), it was a matter of creating the barriers. I had a bit of difficulty designing this, but with help I was able to create a barrier that the balls would respond to. I used this barrier to divide the bottom of the screen into two areas designated “Player 1” and “Player 2”, so that falling balls would have to land in one section. Again, with help from the Professor, we managed to make this process much smoother, more accurate, and more efficient.
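As a rough sketch of the kind of PVector-based bouncing-ball physics this involved (gravity plus edge bounces), with all sizes and constants as illustrative assumptions rather than the project’s actual values:

```
// Minimal bouncing-ball physics with PVector: gravity plus wall bounces.
Ball ball;

void setup() {
  size(600, 400);
  ball = new Ball(width / 2, 50);
}

void draw() {
  background(255);
  ball.applyForce(new PVector(0, 0.3));  // gravity (assumed strength)
  ball.update();
  ball.checkEdges();
  ball.display();
}

class Ball {
  PVector pos, vel, acc;
  float r = 15;

  Ball(float x, float y) {
    pos = new PVector(x, y);
    vel = new PVector(random(-2, 2), 0);
    acc = new PVector(0, 0);
  }

  void applyForce(PVector f) { acc.add(f); }

  void update() {
    vel.add(acc);
    pos.add(vel);
    acc.mult(0);
  }

  void checkEdges() {
    // Bounce off the floor and the side walls, losing a little energy each time
    if (pos.y + r > height) { pos.y = height - r; vel.y *= -0.8; }
    if (pos.x + r > width)  { pos.x = width - r;  vel.x *= -0.8; }
    if (pos.x - r < 0)      { pos.x = r;          vel.x *= -0.8; }
  }

  void display() {
    fill(50, 100, 250);
    ellipse(pos.x, pos.y, r * 2, r * 2);
  }
}
```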

Here are some videos showing the ball’s physics development:

ball-1, ball-2, ball-3, ball-4

.append(User) – Kinetic Interfaces Final Project – Brian Ho : bh1525

Concept:

.append(User) is an interactive installation aimed at enrapturing users by engaging all of their senses simultaneously. A controlled environment is built for users to physically step into: their sense of smell is engaged with perfume, audio instructions play upon entering the room, and animations of flower petals, controlled by the user’s gestures through the Kinect, are projection-mapped onto the fabric of the walls.

Construction:

I forgot to take pictures of the original room, but it was built out of 10 rolls of masking tape and 8 PVC pipes. The PVC pipes I purchased, unfortunately, weren’t the sturdy kind and were extremely wobbly. Despite the 10 rolls of masking tape, the structure not only failed to stand up against the wall, it didn’t even stay standing for a couple of hours. I wasn’t able to use the woodshop since I’m in Interactive Installations, but God bless Jiwon’s heart: with her help we were able to easily build a wooden frame instead. Using old white fabric from a backdrop that I used to use for photography, we created walls and nailed it all together. Pictured below is the “house” we built.


The setup above is what we had for the year-end show.

Using the Kinect and skeletal tracking, I created commands based on the position of the user’s hands relative to their head; a sketch of how these checks might look follows the list.

Freeze: If the Kinect detects the user’s left hand above their head, all the flowers freeze.

Gather: If the Kinect detects the user’s right hand above their head, all the flowers gather at the x, y coordinates of the right hand.

Raise: If the Kinect detects both of the user’s hands above their head, all the flower particles rise instead of falling.
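Here is a sketch of how those three checks could look once the head and hand joints have been read from the Kinect’s skeletal tracking; the placeholder joints in draw() stand in for the real tracked positions and are not the installation’s actual code:

```
// Gesture logic for Freeze / Gather / Raise, assuming head, leftHand and
// rightHand are filled in from Kinect skeletal tracking each frame.
// (In screen coordinates, "above" means a smaller y value.)
String checkGesture(PVector head, PVector leftHand, PVector rightHand) {
  boolean leftUp  = leftHand.y  < head.y;
  boolean rightUp = rightHand.y < head.y;

  if (leftUp && rightUp) return "RAISE";   // both hands up: petals rise
  if (leftUp)            return "FREEZE";  // left hand up: petals freeze
  if (rightUp)           return "GATHER";  // right hand up: petals gather at the right hand
  return "FALL";                           // default: petals keep falling
}

void setup() {
  size(400, 400);
}

void draw() {
  // Placeholder joints; a real sketch would read these from the Kinect library.
  PVector head      = new PVector(width / 2, height / 2);
  PVector leftHand  = new PVector(width / 2 - 80, mouseY);
  PVector rightHand = new PVector(width / 2 + 80, height - mouseY);

  background(0);
  fill(255);
  text(checkGesture(head, leftHand, rightHand), 20, 20);
}
```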

 

Obstacles:

.append(User) wasn’t really designed for an environment like the year-end show: so much outside stimulus takes away from the engagement users feel upon stepping into the room, so written instructions had to be posted. Even then, I found it best to be around to explain to users how to interact with the installation. I also didn’t realize until the show that the Kinect seems to look for four limbs and a head before it begins skeletal tracking; a lot of people who wore long skirts or dresses had an extremely difficult time getting the installation to work because the Kinect wouldn’t track them at all.

Below is the code that I used for the falling particle animations:
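(Not the original embedded code, but a minimal falling-petal particle system of the kind described, with all names and constants as illustrative assumptions:)

```
// A minimal falling "petal" particle system; a flag like frozen
// (set from the Kinect gestures above) could pause the motion.
ArrayList<Petal> petals = new ArrayList<Petal>();
boolean frozen = false;

void setup() {
  size(800, 600);
  for (int i = 0; i < 200; i++) petals.add(new Petal());
}

void draw() {
  background(0);
  for (Petal p : petals) {
    if (!frozen) p.update();
    p.display();
  }
}

class Petal {
  PVector pos = new PVector(random(width), random(-height, 0));
  PVector vel = new PVector(random(-0.5, 0.5), random(1, 2.5));

  void update() {
    pos.add(vel);
    pos.x += sin(frameCount * 0.02 + pos.y * 0.01);  // gentle sideways drift
    if (pos.y > height) {                            // recycle at the top
      pos.y = random(-50, 0);
      pos.x = random(width);
    }
  }

  void display() {
    noStroke();
    fill(255, 150, 200, 180);
    ellipse(pos.x, pos.y, 8, 12);
  }
}
```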

 

Final Presentation Pitch – Kinetic Interfaces : bh1525 – Prof. Moon

Concept:

Building on my previous midterm project, I want to create a controlled environment that stimulates all the senses simultaneously, keeping the aesthetics and overall controls of that project while fine-tuning it and adding new features. For one, instead of using Leap Motion, which forces users to stand over a specific sensor and stay within a couple of feet of a computer, I want to use the Kinect for skeletal tracking. I also want to build a room that can be projected into, so that I have much more control over the environment instead of mapping onto a corner of a room.

 

Presentation Slide:

https://docs.google.com/presentation/d/1NNpykeXV5uTkQGxegJl0L7FXTSga733Ti92eU6OzeB0/edit?usp=sharing

Kinetic Interfaces: Maggie Walsh Final Proposal

screen-shot-2016-12-18-at-12-33-40-pm

screen-shot-2016-12-18-at-12-33-53-pm

screen-shot-2016-12-18-at-12-34-00-pm

screen-shot-2016-12-18-at-12-34-08-pm

Luis showed me this video about a man who has ALS. He was a famous graffiti artist, but because of the disease he could no longer draw, so they produced a device that allows him to draw with his eyes! I thought this was a cool inspiration.

I was looking up eye tracking on YouTube, and one result led me to quite a bit of information. Apparently this man had done a lot of research and written a few papers on his method of eye tracking. I tried to read them, but the math he used got a bit too complicated for me. Maybe if I had more time…and more math knowledge, haha, I could look further into his findings.

screen-shot-2016-12-18-at-12-34-15-pm

http://www.instructables.com/id/X-Y-Plotter/

This is the link to the Instructables page that I was going to use to help me build my x,y plotter. I have a month over winter break and access to a local makerspace, so I was thinking of trying to build something while I am home, since I did not have time during this project, but we will see.

screen-shot-2016-12-18-at-12-34-23-pm

I thought that this installation would not be as difficult as it proved to be, haha. So I’ll talk about it as if I weren’t writing this post with the knowledge of what happened. I wanted to get dry-erase ink, soak felt with it, and attach that felt to a magnet. The magnet would be attracted to another magnet on the other side of an acrylic board. The magnet behind the board would be attached at the intersection of the x,y plotter, so that it would move with the data points, correspondingly moving the ink-soaked felt. Then it would draw! 🙂

screen-shot-2016-12-18-at-12-34-33-pm

I underestimated the timeline, haha. I definitely needed more time to work on the sketch. Plus, I went to Shenzhen, which I had planned for, but it still took an entire weekend out of my work.

screen-shot-2016-12-18-at-12-34-40-pm

I definitely knew what my issues were going to be, and I have to say they were among the most difficult. Data stability was rough to deal with because of OpenCV’s eye and face detection, and frame rate took some creative maneuvering.

screen-shot-2016-12-18-at-12-34-47-pm

Trump’s Wall: Concept Presentation and Final Project Documentation

Final Project Video

Final Code:

Conceptual Presentation:

 

Idea Schematic

schematic

 

Process: 

Coding- For the coding aspect, this project seemed pretty straightforward at the beginning, since I would mostly be combining what we had learned in class about tracking the closest point with a Kinect. In all honesty, the coding provided the fewest troubles. There was a bit of trouble with Processing crashing when equipped with both a Kinect and a webcam, but this was resolved by choosing a lower-quality camera setting. The coding followed the timeline and was finished pretty early in the process.
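A minimal sketch of closest-point tracking of this kind, assuming the Open Kinect for Processing library and a Kinect v2; the sizes and the marker drawn at the end are illustrative, not the project’s actual code:

```
import org.openkinect.processing.*;

Kinect2 kinect;

void setup() {
  size(512, 424);          // Kinect v2 depth resolution (assumed setup)
  kinect = new Kinect2(this);
  kinect.initDepth();
  kinect.initDevice();
}

void draw() {
  background(0);
  image(kinect.getDepthImage(), 0, 0);

  int[] depth = kinect.getRawDepth();
  int closestValue = Integer.MAX_VALUE;
  int closestX = 0, closestY = 0;

  // Scan every depth pixel and remember the nearest valid reading
  for (int x = 0; x < kinect.depthWidth; x++) {
    for (int y = 0; y < kinect.depthHeight; y++) {
      int d = depth[x + y * kinect.depthWidth];
      if (d > 0 && d < closestValue) {   // 0 means "no reading"
        closestValue = d;
        closestX = x;
        closestY = y;
      }
    }
  }

  // Mark the closest point; in the installation this would drive the wall visuals
  fill(255, 0, 0);
  noStroke();
  ellipse(closestX, closestY, 20, 20);
}
```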

Fabricating- To fabricate the physical components of the project, I needed to buy white fabric and build a wooden structure to stretch it across. The fabric was white spandex purchased from the Shanghai Fabric Market; to make sure it would work for rear projection, I brought a flashlight with me to do light testing on it. As for the wooden structure, after I got permission from Matt to use the woodshop for this project, I started to work out how best to build the frame. Originally I thought I would follow this design I had found online:

wall

But as I started considering how the structure would work with the program, I decided that the bases couldn’t project out into the area where the Kinect would be sensing, because they might interfere with the sensing data. To prevent this, I built the wall so that the bases could still be added, but in a way that kept them out of the sensing area. Special thanks to Jiwon for helping me with the wall! After the wall was constructed, I stretched the spandex over it, much like stretching a canvas, and nailed the fabric in place.

Finishing Touches- The final step was mapping the projection onto the structure I had built. While this seemed like the final stretch of the project, and I assumed it would be the easiest step, it actually proved to be the most difficult, because I hadn’t anticipated so many issues with MadMapper. Having used MadMapper before, I assumed it would be easy, but because of some interaction between my computer and the program, I was never able to get it to work effectively: it kept automatically shrinking my sketch into the bottom-left corner of the projection. After switching to different mapping software, it was just a matter of placing the Kinect and projector so that the user would get the optimal effect.

Reflection- Having watched the installation during the IMA show, I have two main takeaways for how I might improve the project in the future. 1) Increased participation. During the show I realized that the size of the installation invited multiple people to interact with it at once, which I hadn’t anticipated. Having seen this, I think using depth thresholds instead of a single closest-point calculation might allow more people to interact with the piece at the same time. 2) Increased stability. While the fabric was surprisingly durable, I would have liked to add more weight to the wooden structure, because there were a few times when people gave the wall a good hit and it almost fell over.

With this all said, I am very proud of my final project. Thank you to everyone who helped me along the way! This especially includes my Kinetic Interfaces Class and Professor Moon! I’m sincerely going to miss taking this class next semester!

“好的好的” (Hao de, hao de; “Okay, okay”)

Zeyao KI final project: Leave Your Comfort Zone

Leave Your Comfort Zone

“Leave Your Comfort Zone” is a public installation that encourages people to step out of their comfort zone and try saying “hi” to strangers.

Description

Using a Kinect, a projector, and Processing, I made a public installation named “Leave Your Comfort Zone”. The project encourages people to leave their comfort zone and shows them how to make friends with their classmates. There is a comfort-zone area where people stand first; an instruction video then encourages them to take a step out of the comfort zone. Once they step out, another video asks them to try waving and saying “hi” to the photo frames on the wall. The users’ classmates say hi back and say things like “Would you like to be my friend?” In this way, users learn the importance of stepping out of the comfort zone and making friends.

 

Demo of Project

 

 

Conceptual Development

  • Inspirations

Since we needed to use what we learned in class, I did some research on Vimeo to find inspiration for my final project. The keywords I used were “kinect”, “interactive”, and “installation”. I found a project called Conductive Orchestra (https://vimeo.com/40505337). In this project, you interact with boards of different shapes that hang from the ceiling: once you reach your hand toward a board, music starts and cool videos are presented on it, so when all the boards are triggered it looks like an orchestra.

At first I thought I could make a world map and hang it from the ceiling; when people waved at different continents, people from that continent would show up. However, this idea was not clear enough: the world map combined with the videos would confuse people. Then I realized I wanted people to use my project to make friends. I told Tyler about this idea and he came up with the idea of the comfort zone, because the reason people don’t make friends is that they want to stay inside their comfort zone. So the comfort-zone idea was added to my project. After settling on the general idea, I started to think about how to make the project clearer. Since the world map was really distracting, I thought photo frames would be a better replacement. Finally, I came up with the idea that when people step out of their comfort zone and wave at a photo frame, their classmates “jump out” of the frame and say “hi” back to the user.

  • Motivation

I observed that a lot of students at NYU Shanghai have a “lonely soul”: they always do things alone, and it seems like they don’t really have friends at our school. Realizing this, I wanted to make an installation that encourages these lonely souls to leave their comfort zone and lets them realize it is actually not hard to make friends here, because NYU Shanghai is such a small community and our classmates are all really friendly.

  • Some notes

img_5509

img_5510

Technical Development

Design: 

The design of this project is mainly hand drawn. At the beginning, I wanted to use wood, or paper that looks like wood, for the photo frames. After laser cutting the paper, I tested projecting video onto it and found that the video looked really dark on the wood-textured paper, even with all the lights turned off. So I had two other thoughts for the design. The first was to draw the photo frames on the whiteboard, but I realized that was really hard to do, so instead I drew all the photo-frame shapes on white paper. I like the handmade aesthetic: it makes users feel closer to the project, especially a project about making friends, and the hand-drawn photo frames look really friendly. Also, inspired by a picture of hand-drawn photo frames, I made all the frames look like they are hanging on a clothesline. Based on the feedback, people liked how the project looks.

img_5502

Implementation:

The materials and tools I used for this project are: a Kinect, a projector, MadMapper, Processing, and a MacBook Air. The Kinect is the main technology. Basically, I made various “invisible” buttons in my sketch. The Kinect reports the depth of the area it senses, so it can tell you how many pixels in a given region are within a certain depth range. I set a pixel-count threshold for each button area: once your hand is in the button area, the count exceeds the threshold and the button triggers. My process was to first use key presses to control the eight different videos, then add the opening video and the instruction video. I routed the video demo sketch into MadMapper so I could test the mapping. Then, based on the sample code that Moon gave me, I created a button to replace the key presses, and after creating one button I created a button array. When I actually set up the installation, I adjusted the position of each button again and again based on the position of each photo frame, until it finally matched my hand positions. Also, I had originally used a table and tape to support my projector; Moon suggested a tripod instead, which made the installation space bigger and kept people from blocking the screen.
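A rough sketch of one of these “invisible” depth buttons: count how many depth pixels inside the button’s rectangle are closer than a cutoff, and trigger when the count passes a threshold. The array name, sizes, and thresholds below are assumptions, not the actual values used:

```
// A minimal "invisible button" check, assuming a Kinect depth frame has
// already been read into rawDepth[] by a Kinect library each frame.
int depthW = 512;          // depth frame width (depends on the Kinect model)
int depthH = 424;          // depth frame height
int[] rawDepth = new int[depthW * depthH];

boolean buttonPressed(int bx, int by, int bw, int bh, int maxDepth, int minCount) {
  int count = 0;
  for (int x = bx; x < bx + bw; x++) {
    for (int y = by; y < by + bh; y++) {
      int d = rawDepth[x + y * depthW];
      // count pixels close enough to be a hand reaching toward the frame
      if (d > 0 && d < maxDepth) count++;
    }
  }
  return count > minCount;   // enough "near" pixels means the button is triggered
}

void setup() {
  size(512, 424);
}

void draw() {
  // rawDepth would be refreshed from the Kinect here
  if (buttonPressed(100, 150, 60, 60, 900, 200)) {
    // play the "hi" video for the photo frame mapped to this button
  }
}
```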

Lessons Learned

User Test:

I got some feedback on my project at my presentation and at the IMA final show. First, more functionality could be developed: since the purpose of the installation is to get people to say hi, the interaction between users and the people in the photo frames could be deepened. For example, once the classmates say hi back, they could say more things or even show their QR codes! Second, it would be better to show the user on the screen so they know where they are and where their hands should be; this is not hard to achieve, since I can add the user’s point cloud. Third, I should tell users how to wave. It sounds a little silly, but when people experienced my project they all had different ways of saying hi 😂; if I told them more specifically what to do, the result would be better. Finally, the installation could be bigger: although the distance between photo frames seems wide enough, the distance between buttons is actually really small, so users easily trigger neighboring photo frames. In general, people really liked the project; when their classmates “jumped out” and said hi to them, they were surprised and excited! I also found a potential market for the project: the administration and the Student Life office seemed to have a strong interest in it, because the installation kind of promotes their message.

 

KI – Final Project Documentation, ZZ

Title:

A Song Without Words

Elevator Pitch:

This is an interactive dance piece that uses a Kinect to project visuals generated from the body movement the Kinect senses.

Description:

This dance piece is a visual response to the song 无言歌 (“A Song Without Words”) by Sandy Lam. My friend Ann Yang, a talented dancer, choreographed the dance under my general guidance. There are four stages in the performance: birth, accumulation, explosion, and decay. Each stage is represented by a different visual effect, such as a point cloud, a particle explosion system, a particle ring, and collapsing particles. The main point-cloud figure is a live visual likeness of the dancer, and the dancer in turn reacts to the visuals, modifying her routine based on what her previous movements and positions have created.

Conceptual Development:

I have always wanted to work on a dance/performance project, and after watching some of the kinetic dance piece videos shared by Prof. Moon, I decided to seize the chance to collaborate with Ann. We learned about point clouds in class, and I like that they are simple, easy to modify, and visually compelling, so I figured this was something I could include in the performance. I talked to Ann about the general idea and she immediately said yes. Then I recommended a couple of songs to her: songs that impress me by generating something abstract in my mind when I listen to them. We agreed on the song by Sandy Lam, and I started thinking of more detailed visuals. The song has a strong religious vibe, and the lyrics list holy terms, so it is hard to understand and different from most of the ballads Sandy is famous for. It begins with a sacred choral section that transitions smoothly into the verse, where she sings the names of holy figures in different religions. After that, the chorus is literally a human crying for redemption and seeking help to atone for the sin in his own thoughts. So the story setting I gave Ann is that the idea of confession occurs in a person’s mind; the person begins his redemption, then struggles with it, feeling strangled and getting truly lost on the way back to his original state; and at the end he is either extremely tired, having almost consumed his energy and failed to find a way back, or his life is taken. It doesn’t have to be interpreted only this way. Since the dance routine is the product of an endless loop of reacting to visuals that themselves react to the dance, I also wanted to show karma, or the life cycle, and the different states a human passes through in that loop: struggling, failing, conquering.

Technical Development:

Thanks to Prof. Moon for letting me use his Mac Pro, keyboard, trackpad, screen, and speaker. The visuals are created in a Processing sketch with a control panel. The visuals are sent to Arena via Syphon, but the control panel is not, so I can do live control of things like particle speed, color gradient, and transitions over the visuals. A Kinect senses depth and captures the shape of whatever is within sensing range; the explosion and the particles are attracted to the closest point in the sensing area. I didn’t run into many technical difficulties during the process, but again, I have to thank Prof. Moon for making this project happen. I also need to thank Jingyi Sun and all my Kinetic Interfaces classmates for tolerating or contributing to this project.
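A minimal sketch of the Syphon setup described (sending only the visuals, not the control panel), assuming the Syphon library for Processing; the server name, canvas sizes, and placeholder visuals are assumptions:

```
import codeanticode.syphon.*;

PGraphics visuals;        // off-screen canvas that gets sent to Arena
SyphonServer server;

void setup() {
  size(1280, 720, P3D);   // Syphon needs an OpenGL renderer
  visuals = createGraphics(1280, 720, P3D);
  server = new SyphonServer(this, "Dance Visuals");
}

void draw() {
  // Draw the performance visuals into the off-screen buffer only
  visuals.beginDraw();
  visuals.background(0);
  visuals.fill(255);
  visuals.ellipse(frameCount % visuals.width, visuals.height / 2, 20, 20);
  visuals.endDraw();

  // Send just the visuals to Arena over Syphon
  server.sendImage(visuals);

  // Draw a preview plus the control panel on the main window; this is never sent
  image(visuals, 0, 0);
  fill(0, 255, 0);
  text("particle speed: 1.0", 10, 20);
}
```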

 

img_6439

img_6444

 

Kinetic Interfaces: Eye Draw, Maggie Walsh’s Final Documentation


screen-shot-2016-12-14-at-3-13-58-pm
screen-shot-2016-12-14-at-3-16-35-pm

 


My final for Kinetic Interfaces is an interactive installation I call “Eye Draw.” It uses eye tracking to allow a user to move objects in a Processing sketch. Here is a short video of the final product.

VIDEO 

Materials: 

My only material for this project was a MacBook Air. My sketch can run well on a device with a smaller amount of processing power, due to the way I resized the images running through OpenCV. There needs to be a webcam of some sort in order to detect the user’s eyes.

Ideation: 

I was inspired by the fish tank video I showed in the Computer Vision class. I admired the idea of extending the senses or capabilities of a person or thing, which is why I wanted to do something like this: it extends the ability of the eyes from seeing to actually performing.

Challenges:

I faced many challenges during this project; I will explain them in sections:

 

  1. Framerate

I found in the beginning that it was very difficult to keep the framerate of this sketch high. The way I combated that was to resize the images that OpenCV was analyzing for face and eye detection. I really wish my computer had enough processing power to analyze the raw image, because as it stands the OpenCV analysis is not very accurate due to the small size of the image: you need to be in just the right position for the eyes to be detected. Also, if the face is not entirely in the frame, it won’t be recognized as a face. Most people’s initial reaction is to put their face very close to the screen so they can see their eyes, but the eyes aren’t recognized if the face isn’t, because I use the detected face to decide which eyes are the proper ones to track.
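A minimal sketch of this downscale-then-detect approach, assuming the OpenCV for Processing library and a standard webcam capture; the scale factor and sizes are assumptions:

```
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
int scl = 4;   // analyze a 4x smaller image to keep the framerate up

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  opencv = new OpenCV(this, 640 / scl, 480 / scl);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);

  // Shrink the frame before handing it to OpenCV
  PImage small = cam.copy();
  small.resize(640 / scl, 480 / scl);
  opencv.loadImage(small);

  // Detect faces on the small image, then scale the boxes back up.
  // The same idea works for eyes (OpenCV.CASCADE_EYE) searched inside the face box.
  Rectangle[] faces = opencv.detect();
  noFill();
  stroke(0, 255, 0);
  for (Rectangle f : faces) {
    rect(f.x * scl, f.y * scl, f.width * scl, f.height * scl);
  }
}
```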

 

  2. Different Types of Eyes

I thought that testing on my own eyes would be okay because I have quite light eyes, but when people with even lighter eyes came to test the project, it did not work on them (Clay Shirky and Owen Roberts). Therefore I think I need to adjust the threshold a bit more and find the right lighting for this to work (I will talk about this more in the lighting section).

Also, people with smaller eyes sometimes were not recognized very well, because the way they moved their eyes sometimes caused them to close a bit (especially when looking down), and if the eyes are closed they are not recognized.

 

  3. Lighting

Lighting was an issue because it means there has to be a constant in the environment in which this project is used. I really don’t like that, because I want to use this in many environments. Unfortunately, when using color tracking, lighting seems to be one of the most important factors, as I learned in my midterm. Instead of color tracking I could potentially switch to blob tracking, but doing that on such a small OpenCV image might not be the most effective idea for either accuracy or framerate, for the reasons I discussed earlier.

 

  4. Data Stability

Data stability was another of the largest problems I had during this entire project. Luckily, Professor Moon helped me realize that I was drawing things inside the for loop, which made things not work very well. That is something I need to learn more about: for loops, and why things work sometimes and not others. By the end, everything became more stable thanks to mapping, finding proper averages, and lerp. Thank you Moon! 🙂
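A small illustration of that kind of smoothing: Processing’s lerp() eases the displayed point toward each new, noisy reading (here faked with the mouse plus random jitter, since the actual detection values aren’t shown):

```
// Smoothed position; lerp() eases the old value toward the new reading,
// which hides frame-to-frame jitter in the detection.
float smoothX, smoothY;
float easing = 0.1;   // smaller = smoother but laggier

void setup() {
  size(640, 480);
}

void draw() {
  background(0);

  // Stand-in for the raw detected eye position (here: the noisy mouse)
  float rawX = mouseX + random(-10, 10);
  float rawY = mouseY + random(-10, 10);

  smoothX = lerp(smoothX, rawX, easing);
  smoothY = lerp(smoothY, rawY, easing);

  fill(255, 0, 0);
  ellipse(rawX, rawY, 8, 8);          // jittery raw value
  fill(0, 255, 0);
  ellipse(smoothX, smoothY, 16, 16);  // stabilized value
}
```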

 

Future

In the future I would like to try to calculate “gaze tracking” from this sketch, so that people would not have to look away from the sketch to play the game (another large issue in the design). I would also like to make a device, like glasses, that creates the ideal environment for eye tracking without the person having to be in a perfect environment. But at the same time, I like the idea of people not wearing anything at all and still being able to control things.

Antonius gave me an interesting idea: right now in my project you are just looking at empty space when moving your eyes; there is no target in the real world. I think it would be so cool to create an environment where you could use IoT devices, like the WioLink, to make things in the real world respond to the movement of your eye. This was sort of what I wanted to do all along, but Antonius helped me look at it from a new perspective. I have some cool ideas for this project.

You can take a look at my code below:

SERVER:

CLIENT: