Kinetic Interfaces: Midterm – Collaborative Musical System (Peter)

Project Idea

The idea of this project is to create a collaborative music system built on kinetic interfaces. The system consists of a conductor, several instruments, and Leap Motions as inputs, all connected over the network. More specifically, the conductor monitors the music data from the instruments and controls their musical features, while each player performs on a digital instrument interface with a Leap Motion. Overall, the project aims to create a system with which users can make music cooperatively, and to explore what collaborative computing can do for musical, or even artistic, creation.

Network

The conductor and the virtual instruments are connected over the network. To achieve this, we used a Processing library called oscP5. After experimenting with several of the example connection methods provided by the library, we decided that oscP5's TCP connection is the most suitable for this project. Using the TCP connection methods, we are able to make the server and the clients respond in real time. More specifically, each client pings the server repeatedly (every 0.5 seconds), and the server acknowledges the pings and responds with the musical-feature data. This way of connecting also allows the system to track which clients exist: if a node stops pinging the server for long enough, the server deletes it from its list and updates the interface accordingly.
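The ping-and-prune logic described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual oscP5/Processing code; the class and method names are made up, and the 2-second timeout is an assumed value.

```python
import time

class ClientRegistry:
    """Tracks which instrument clients are alive based on their last ping time."""

    def __init__(self, timeout=2.0, clock=time.time):
        self.timeout = timeout
        self.clock = clock          # injectable clock, which makes testing easy
        self.last_seen = {}         # instrument name -> timestamp of last ping

    def ping(self, name):
        """Record a ping from a client (called every 0.5 s in the real system)."""
        self.last_seen[name] = self.clock()

    def prune(self):
        """Drop clients that stopped pinging; return the removed names so the
        conductor can update its interface accordingly."""
        now = self.clock()
        dead = [n for n, t in self.last_seen.items() if now - t > self.timeout]
        for n in dead:
            del self.last_seen[n]
        return dead

    def clients(self):
        return sorted(self.last_seen)
```

The injectable clock is just a convenience so the timeout behaviour can be exercised without real waiting.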

Instrument (Client)

The instruments have two tasks. First, each one provides the player with a digital instrument interface on which the instrument can be played with a Leap Motion. Second, each client pings the server. The pings include the instrument's name, so the server can tell whether an instrument just came online for the first time or is one it has seen before. This means that as long as a server is online, clients are free to come and go, which makes the system more scalable.

Conductor (Server)

For now, the conductor has two main functions. First, it controls the instruments connected to it by sending them musical-feature data. Currently, the only musical feature is volume, but we are going to explore additional features that make sense to play with. Second, the conductor is aware of the size and identity of the whole network. This has two advantages: we can design each instrument's features individually and send them to the correct instrument, and the conductor's interface can be dynamic. For instance, when two instruments are connected, the interface splits in two, and so on as more instruments join; if one or more instruments leave the system, the interface changes accordingly.
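The dynamic splitting of the conductor's interface can be sketched as a small helper that divides the window width among the connected instruments. This is a hypothetical Python illustration, not the actual Processing code.

```python
def split_interface(width, instrument_names):
    """Divide the conductor's window into one equal-width panel per connected
    instrument, returning (name, x_start, panel_width) triples."""
    n = len(instrument_names)
    if n == 0:
        return []
    panel_w = width / n
    return [(name, i * panel_w, panel_w) for i, name in enumerate(instrument_names)]
```

Each time a client joins or is pruned, the conductor would recompute this layout and redraw.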

Improvements

This project is meant to be developed for the whole semester, and this midterm project is the first phase of it. For the rest of the semester, we are expecting the following improvements:

  1. Create additional interfaces for additional musical instruments.
  2. Explore in more detail which musical features make sense to alter.
  3. Consider using Microsoft Kinect as the user input device for the conductor.
  4. Improve and beautify the interfaces for both the instruments and the conductor.
  5. We might try to extend the system beyond the LAN to achieve better flexibility and scalability.

Video Demo

Kinetic Interface – Week 6 Leap motion (Peter)

Intro

In this week’s assignment, I am practicing basic usage of Leap Motion.

Idea

So what I created for this assignment is basically a drawing program. It has two functions beyond the one we went through in class: color changing and clearing. Specifically, besides drawing with your index finger, you can use a pinch gesture to pick colors on an HSB color bar at the bottom. There is also a button at the top-left corner; pressing it clears whatever you have drawn.

Furthermore, for the drawing part, I referred to one of the sample sketches, which allowed me to create the effect of color drops splashing around the main stroke.

Code Analysis

For the implementation, I split my program into two parts: the drawing part and the color changing part.

  • Drawing part: this part is quite straightforward. I use the usual trick of keeping the previous and current coordinates to track the movement of my finger, then draw with the line function. I also explored the splat effect created in one of the sample sketches: while the size of the splashed color drops is random, whether drops appear at all is determined by how fast the user moves his or her finger.
  • Color changing part: this part is more complex. I created an HSB color bar and tracked the pinch gesture to allow color picking. However, two things had to be taken into account: first, how to map the movement of the pinch onto the bar; second, how to keep the computer from accidentally detecting a pinch gesture while drawing. The first problem is relatively easy to solve: I simply take the distance from the original pinch point to the current point and map it onto the length of the bar. The second problem is more troublesome. Eventually, I had to separate the drawing function and the color-picking function: you have to move your finger to the bottom of the screen to pick a color, whereas outside that region you can only draw.
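The distance-to-bar mapping and the bottom-region gating described above can be sketched like this. The function names and the 60-pixel bar height are assumptions, and map_range mirrors Processing's built-in map().

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Processing-style map(): linearly rescale value between two ranges."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def pick_hue(pinch_start_x, pinch_now_x, bar_length):
    """Map the horizontal distance travelled since the pinch began onto the
    HSB bar, clamped to the bar, and return a hue in [0, 360)."""
    d = max(0.0, min(bar_length, pinch_now_x - pinch_start_x))
    return map_range(d, 0, bar_length, 0, 360)

def can_pick_color(finger_y, screen_height, bar_height=60):
    """Color picking is only allowed near the bottom of the screen, so a pinch
    made while drawing elsewhere is ignored."""
    return finger_y >= screen_height - bar_height
```

Gating on the finger's region is what keeps the two modes from interfering: a pinch outside the bar region is simply never interpreted as a color pick.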

Sample Result

Here is a sample result of playing with the program for a while:

Kinetic Interfaces: Weekly Assignment 3 – Pixel Manipulation and WebCam (Peter)

Intro

In this week’s assignment, we are practicing pixel manipulation and the use of a webcam.

Idea

So what I created for this assignment is basically a photo capturer with interactive features. When the program launches, it captures a picture using the webcam. At this moment, however, the picture is hidden. The user has to use a beam of bright light (RGB (255, 255, 255)), such as a phone's flashlight, to reveal the picture part by part. In the meantime, the picture is recreated using ellipses in a partly random color. Every 15 seconds the color changes, and every 120 seconds the webcam recaptures.

Code Analysis

The code is relatively simple for this assignment. Basically, I am combining pixel manipulation, specifically image recreation, with the use of a webcam. The code is short and sequential and can be broken down into the following parts:

  • When the program launches, it uses the webcam to capture a picture and saves it.
  • It then keeps reading from the webcam and checks whether there is any completely white spot (the flashlight). If there is, it reveals the corresponding part of the picture.
  • To reveal that part of the picture, I used the same ellipse-based image-recreation technique we were taught in class.
  • Each ellipse is a shade of red, green, or blue of a random intensity; every 15 seconds we switch to the next random color. The shade is determined by the grayscale value of the corresponding pixel in the original picture.
  • Additionally, the picture is mirrored for a better user experience.
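The white-spot detection step can be sketched as a per-pixel brightness test over the webcam frame. This is an illustrative Python sketch, not the Processing code; the threshold of 250 is an assumption (the actual sketch looks for pure RGB (255, 255, 255)).

```python
def is_bright(r, g, b, threshold=250):
    """A pixel counts as part of the flashlight spot when every channel is
    near its maximum, i.e. the spot is essentially pure white."""
    return r >= threshold and g >= threshold and b >= threshold

def reveal_mask(frame, threshold=250):
    """Given webcam pixels as a 2D grid of (r, g, b) tuples, return a grid of
    booleans marking which parts of the hidden photo should be revealed."""
    return [[is_bright(*px, threshold=threshold) for px in row] for row in frame]
```

In the real sketch, each True cell would trigger drawing the recreating ellipse for the corresponding pixel of the saved photo.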

Sample Result

Here is a sample result of playing with the program for a while:

Kinetic Interfaces: Weekly Assignment 2 – OOP and Buttons (Peter)

Intro

In this week’s assignment, we are practicing Object-Oriented Programming in Processing and also the use of buttons.

Idea

So what I created for this assignment is a game built around bouncing balls. Basically, you need to control a board to catch the balls and bounce them back. The number of balls increases as the player reaches higher levels. If any ball falls out of sight, the game ends, and a final score is calculated from how many times the player bounced balls up.

Code Analysis

For this assignment, I created three classes, which are “Ball”, “Panel”, and “Start”.

  • “Start”: this is basically a standard responsive button. When the player hovers the mouse over it, it changes color. If the player clicks it, the state of the program changes, namely from “start” to “in-game”, or from “game-over” to “in-game”.
  • “Panel”: this class corresponds to the board that catches the balls. It has two main methods. The first is “move”, which is self-explanatory. The second is “collision”, which senses whether a ball is falling onto the board; if so, it bounces the ball back and adds 1 to the total score.
  • “Ball”: this is the most complicated class. In general, the balls respond to all kinds of collisions: with the boundaries, with other balls, and with the board. The first two are handled in this class. To be specific, the methods recalculate the speed of a ball once a collision happens. It is worth mentioning that since I am not familiar with the use of vectors in Processing, the way the balls bounce does not necessarily conform to the laws of physics. Furthermore, each time a ball hits the board it shrinks, and when its size drops below a certain value it fades out and is deleted from the ArrayList that contains the balls.
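The wall bounce and the shrink-on-hit behaviour of the Ball class might be sketched like this. It is a simplified Python model of the description above, not the actual Processing class; ball-to-ball collisions are omitted and all numbers are illustrative.

```python
class Ball:
    """Minimal model of the game's ball: bounces off walls, shrinks when it
    hits the panel, and signals when it should fade out and be removed."""

    def __init__(self, x, y, vx, vy, size=40):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy
        self.size = size

    def update(self, width, height):
        """Move one step and reflect the velocity off the side and top walls.
        Falling past the bottom is left to the game-over check."""
        self.x += self.vx
        self.y += self.vy
        if self.x < 0 or self.x > width:
            self.vx = -self.vx
        if self.y < 0:
            self.vy = -self.vy

    def hit_panel(self):
        """Bounce back up and shrink; returns False once the ball is small
        enough that the caller should delete it from the ArrayList."""
        self.vy = -abs(self.vy)
        self.size -= 4
        return self.size > 10
```

Simply negating a velocity component is the same non-physical but serviceable bounce the post describes.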

Video

Kinetic Interfaces: Weekly Assignment 2 – Functions and transformations (Peter)

Intro

In this week’s assignment, we are practicing transformation functions and user-defined functions, with consideration of modularity and reusability.

Idea

So what I created is a clock. Besides reading and presenting the local time, it has two extra features: speeding up time and changing the background color according to the time of day. The basic idea is that in the middle of the day the color is sky blue, and it gets darker and darker as the time gets later. The speed-up feature was originally just meant to be a testing tool, but it turned out to add another layer of interactivity.

Code

I created five user-defined functions, listed below.

drawClockFrame(): This is a simple function that draws the clock frame.

drawClockHand(float w, float h, color c, float startTime, int timeScale, float speedScale): This function is responsible for moving the clock hands as time goes by. Its arguments determine where the hand currently is, its color, which hand it is (minute, second, or hour), and how fast it should move. Note that it is reusable: all three hands are implemented with this single function and are able to represent the actual time.

backGroundColor(float seconds): This function determines the current background color. It takes a time argument ranging from 0 to 86400, which tells what time of day it is, and changes the background color accordingly.

speedControl(): This function implements the time-speed control bar. For testing and playability purposes, the speed of time is controllable, ranging from 1× to 2000× actual speed. Note that it simply scales up the radians to make this possible, so when you move the red ellipse back to its original point, the clock tells the actual time again.

textClock(): This function shows the time with text, and also prints out the current speed of time.
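The idea behind backGroundColor() can be sketched as a mapping from seconds-of-day to brightness: brightest at noon, darkest at midnight. This Python sketch is an assumption about the mapping; the sky-blue RGB values and the linear fade are illustrative, not the sketch's exact palette.

```python
def background_color(seconds):
    """Map the time of day (0..86400 s) to a sky color that is brightest at
    noon and fades to black toward midnight."""
    noon = 43200.0
    # brightness is 1.0 at noon and falls linearly to 0.0 at midnight
    brightness = 1.0 - abs(seconds - noon) / noon
    # scale a sky-blue base color (135, 206, 235) by the brightness
    return tuple(round(c * brightness) for c in (135, 206, 235))
```

With speedControl() scaling time, the same function naturally produces an accelerated day/night cycle.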

Video Demo

Please click this link to access the video.

Clock

Kinetic Interfaces: Super Mario Bros in AR (Peter)

“Super Mario Bros” in AR is a game made by Abhishek Singh. In general, it recreates the first level of the famous Super Mario game in AR. The environment of the game is mapped onto the actual world and presented to the user's eyes by the headset. The headset then captures the user's body movement, voice commands, and hand gestures, so that the player can actually play as Mario himself and interact with the virtual world mapped out for him or her.

The technology used is very advanced. In short, the whole game is based on Microsoft's new product, the HoloLens, which is basically the next generation of the Microsoft Kinect. The HoloLens is a high-tech headset that incorporates almost all the kinetic technologies. For example, the Inertial Measurement Unit, a combination of an accelerometer, a gyroscope, and a magnetometer, allows the headset to sense the user's movements, such as walking and jumping. Meanwhile, the headset is also equipped with a set of environment-understanding sensors and several depth cameras, so it can map a virtual environment onto reality as well as sense the user's interactions with that environment, such as touching an object or making a hand gesture. What's more, the headset also has speakers and microphones, which enable spatial sound effects and voice control.

What I found most interesting about the project is that it takes the next step for AR games. Unlike previous AR games such as Pokemon Go, this game no longer requires a screen interface. With the HoloLens, it literally brings the game into the real world, providing a much more immersive environment.

Find my slides for the presentation here:

https://docs.google.com/presentation/d/1m-LJ0O3K5mgy6SiNcWg7HmqVoFnRdg16c_ChCw-DPwY/edit?usp=sharing

Network Everything Final project – Missing stuff finder, by Diana and Peter

Project Name: Missing Stuff Finder

Group member: Diana, Peter

Instructor: Scott

IMG_2678

Note: The blue pencil case is the wireless stuff tracker, and the mysterious animal is only for being cute and fun

Inspiration:

When we lose our phone, we usually call it to find it. But when we can't find our other stuff, like a charger or an Arduino kit, we have no way to find out where it is. Therefore, we decided to create something that works similarly to a phone call, so that people can save a lot of time when they need to find their lost stuff.

Design:

Our system aims at the situation where someone misplaces their belongings and thus fails to find them quickly. This can be very annoying when someone is in a hurry, such as when leaving for work or school in the morning. Our system has the following parts:

Website: we created a website with several buttons on the page. People first log in with their name; then they can press a button on the page to make the hardware receivers play music.

Xbees: the Xbee hub sends commands to the receivers when a button is pressed. The Xbee receivers receive signals from the hub and then make the buzzer play music, announcing where they are. People can stop the music by pressing the physical button on an Xbee receiver. If they still cannot find the item even while it is buzzing, they can also press the button on the web page to stop all the music.

Major Devices Utilized:

Arduino UNO: supports Xbees and Ethernet shields

Ethernet shields: holding websites

Xbees: including Xbee shields and antenna, consisting of a hub and three receivers

Communicating Protocols Covered:

Serial communication: sending commands from the Ethernet shield (server) to the laptop, which runs a Twitter API script written in Python.

I2C: supports the communication between the Ethernet shield (server) and the Xbee hub.

Xbees (wireless serial communication): one-to-many communication between the Xbee hub and all the receivers.

SPI: supports the communication between the microcontroller on the Arduino UNO and the Ethernet chip.

Components: Functions, Problems, and Solutions.

Ethernet Shield (serving the website, sending commands to the hub via I2C, and sending data to a Python script that tracks each user's lost-and-found info)

We used the Ethernet shield to serve a website that controls the whole system. The Ethernet shield communicates with both our laptop and the hub Xbee. In terms of the communication between the shield and the Xbee, the Ethernet shield (along with the Arduino UNO) reports the user's operations on the website to the hub Xbee via I2C. In terms of the communication between the Ethernet shield and our laptop, two different kinds of communication happen. First, the laptop, as a client, requests the web page from the server and interacts with it; this part is based on the HTTP protocol on the application layer. Second, the laptop also listens to the serial port connected to the Arduino that runs the server. Each time the server sees a “finding request” on the webpage, it sends the request and the username to the laptop via serial communication. Once the Python script running the Twitter API on the laptop gets a command, it posts a message tracking each user's finding history.

We did have some big problems with this part. The first one concerns the content of the HTTP request. After the user testing last Friday, we found that the checkboxes on our webpage could be confusing and misleading, so we replaced them with buttons. After the replacement, we couldn't get the Xbees to respond correctly. It seemed that the server was not reading the HTTP requests correctly: it was always reading both the current content of the “GET” message and the previous one. After a long time of debugging, we realized that we had let the server read the entire client request each time, and requests include a “Referer” header that records the content of the previous message. We solved this by adding an upper bound to the length of the request the server reads each time, preventing it from reading the unnecessary parts of the message that caused the interference.
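The fix, reading only a bounded prefix of the request so the Referer header is never parsed, can be sketched in Python. This is an illustration of the idea only; the actual fix lives in the Arduino code, and the 64-byte bound here is an assumed value.

```python
def parse_request_path(raw_request, max_len=64):
    """Read only a bounded prefix of an HTTP request and extract the GET path,
    so later headers such as Referer (which repeats the previous URL) can
    never leak into the parsed command."""
    head = raw_request[:max_len]           # the upper bound on what we read
    first_line = head.split("\r\n", 1)[0]  # request line: "GET /path HTTP/1.1"
    parts = first_line.split(" ")
    if len(parts) >= 2 and parts[0] == "GET":
        return parts[1]
    return None
```

Since the request line always comes first, capping the read length is enough to shut out the Referer header entirely.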

The second problem was on the hardware side and was relatively simpler. Our initial plan was to stack the Arduino, the Ethernet shield, and the Xbee shield together, which would have made a delicious hamburger of a control center. However, we then found that this would not work. The truth is that the ICSP header does cause a problem in this case. It is true that the ICSP header doesn't matter in terms of communication; however, it matters for the power supply. The Xbee shield gets its power right from the ICSP headers, and since there are no ICSP headers for the Xbee on the Ethernet shield, we cannot stack them together. This is why we used another Arduino for the hub Xbee and connected it to the Ethernet part via I2C.

IMG_2676

Note: Here is the control centre! The board with the blue ethernet cable is the ethernet shield, and the one with the silver cable is the Xbee hub. 

Xbee System

Our alarm-and-find system essentially consists of a hub and several receivers (in our case, three). For the communication between the hub and the Arduino running the Ethernet shield, we used I2C. We thought we could stack the Arduino, the Ethernet shield, and the Xbee shield together like a hamburger, but it turns out that the Xbee gets power from the ICSP headers, and Ethernet shields provide no ICSP output headers for the Xbee shield.

When the Xbee hub receives a message about which item the user is looking for, denoted by 0, 1, or 2, it broadcasts the message, in the format “stuff_id + command”, to all of the receivers (since it is a one-to-many communication). The receivers, each assigned a unique id, decode the message by basically subtracting their id from it, and if the outcome reads “H”, the buzzer goes off, notifying the user where the missing item is. Finally, when the user finds the buzzing item, he or she can press the button on it to turn it off. (The user can also turn things off on the website, which triggers the hub to send an “L”, using the same encoding and decoding scheme.) What's more, the whole Xbee system runs on a program that implements a state machine, so each receiver switches between states (e.g. buzzing or silent) in a way that avoids any interference that might occur.

The biggest problem with the Xbee system was flow control. The Xbees do have flow-control algorithms built in, but since their receive buffers are bigger than those on the UNO, we had to add our own flow control in our Arduino code. Of course we didn't use a flow-control algorithm like TCP's, which would be horrible; our idea was instead to make sure that the receivers always read faster than the senders send. What's more, each microcontroller should clear its buffer before switching from one state to another, which keeps the system from being confused by the redundant messages left in the buffer. (The redundant messages exist because multiple signals are sent for any single command to deal with potential packet loss, since wireless is less stable.)
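The id-plus-command scheme described above can be sketched in a few lines. This is a Python illustration of the encoding, assuming the command travels as a single byte offset by the target's id, which matches the "subtract the id, check for H" description; the real code is Arduino C.

```python
def encode(stuff_id, command):
    """Hub side: combine the target id (0, 1, or 2) with the command
    character ('H' = buzz, 'L' = silence) into one broadcast byte."""
    return ord(command) + stuff_id

def decode(my_id, raw_byte):
    """Receiver side: subtract our own id from the broadcast byte; only the
    addressed receiver recovers a valid command, everyone else sees garbage
    and ignores it."""
    cmd = chr(raw_byte - my_id)
    return cmd if cmd in ("H", "L") else None
```

Broadcasting one byte and letting each receiver self-address keeps the one-to-many protocol trivially small, which matters given the UNO's tiny serial buffers.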

IMG_2679

Note: Here come the trackers, and their internal look. Again, the bear is just for being cute.

API

Considering the success rate of posting tweets, we didn't use the MKR1000, which runs with Temboo. Instead, we chose an API offered by Twitter and ran it on our laptops. It reads data from serial, decodes the data from the Arduino into Python, and then sends tweets. But there were still several problems. First, you are not allowed to send the same tweet twice within about 24 hours. To solve this, we added a counter, incrementing it to record how many times a user has lost a certain item, which simultaneously avoids creating duplicate tweets. But then we found that the counter wouldn't restart from zero when we changed to another user name, so we created a dictionary in Python to store all the user information, which solved the problem. After testing the API several times, we found that it would crash when we logged in as “Diana”. This is because on Twitter the letter “d” (along with “F” and “U”) is a default shortcut for direct-messaging, following, or unfollowing a person, and such messages are rejected by the API. To avoid the shortcut issue, we added “T_T” to the beginning of every tweet.
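The per-user counter dictionary and the “T_T” prefix can be sketched like this. The tweet wording is illustrative, not our actual script's.

```python
def make_tweet(history, user, stuff):
    """Build a unique tweet per lost-and-found event. history is a dict keyed
    by (user, stuff), counting how many times the item went missing: the count
    makes each tweet unique so Twitter's duplicate filter accepts it, and the
    'T_T' prefix keeps the reserved d/F/U command shortcuts off the front."""
    key = (user, stuff)
    history[key] = history.get(key, 0) + 1
    return "T_T %s lost the %s %d time(s)" % (user, stuff, history[key])
```

Keying the dictionary by (user, stuff) is exactly what fixes the "counter doesn't reset per user" bug: each user/item pair gets its own count.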

Screen Shot 2017-05-20 at 10.35.50

Note: the above is the result on the twitter account

Screen Shot 2017-05-20 at 10.35.36

Note: The above is the twitter API running in terminal.

Future Development & Takeaways

First of all, as Scott suggested in class, having multiple “missing items” playing music all together might cause confusion. We think there are two approaches to deal with this. First, we could enforce that only one device plays music at a time, meaning that when the user switches from one device to another without silencing the previous device, that device is turned off automatically before the second one goes off. This was actually our original design, and it can be implemented with only three more lines of code. However, we then realized that a user might want to find two items at a time, so we changed to adding the “give up” button, which turns off all the devices. Therefore, if we were to improve this part, we would take the second approach: giving different devices different music, allowing users to tell each item apart from all the others.

Secondly, another improvement, a rather ambitious one, would be extending the system to broader use. At the moment, the system can only find “temporarily missing stuff” within a certain range (about 30 meters), but people might want to find any of their belongings anywhere. This would require refinement in at least two respects. First, the server should be relocated: we would need an online virtual server that can support many real clients. Second, we could no longer rely on Xbees; in this “universal finding system”, every device would need to get online and report its location to the server.

In conclusion, the process of doing this project turned out to be a very rewarding experience. We got the chance to try out several relatively advanced communication protocols, ranging from HTTP (requests and responses) between server and clients to Xbee one-to-many communication. Most of the communication approaches we picked were ones we were interested in but not entirely confident with. After doing this project, we feel we gained a solid and comprehensive understanding of building and appreciating networks in a modern context, which is truly a satisfying closure to our valuable experience in this exciting class over the semester.

Thank you for reading; now let's see the video demo:

Code:

Web server:

Xbee hub:

Xbee receiver:

Python twitter API:

Note: I am only posting the script that calls the Twitter post function, which also keeps track of each user's information. The Twitter post script itself was provided by Twitter; you can find it on the Twitter API website or on GitHub.

(Week 12) Final project proposal, by Peter and Diana

Group member: Peter, Diana

Instructor: Scott

Here is the link to the slides:

https://docs.google.com/a/nyu.edu/presentation/d/1PV6XYeI_bs9ruzET4a9FTQclBWEZnu-ndSSGDgouvxc/edit?usp=sharing

For our final project, we plan to make a system that keeps track of the annoying things that disappear a lot, such as keys, chargers, and NYU id cards.

The system would utilise Ethernet shields, Xbees, and so on, and serve a website with which the user can make the missing stuff blink or buzz when it is hiding from them.

For the details, see the slides~

 

(Week 11)Network Everything – Final inspiration, by Peter

After searching online, I found this project very interesting: a wireless hotspot that broadcasts the weather as its network name.

And here is the link: http://www.coin-operated.com/2015/02/01/wssid-2015/

Project description:

“WSSID” is a hacked wireless router that collects and broadcasts live weather data from its local environment in the form of dynamically changing SSID names. The project examines the use of wireless networks on the environment and how we are always seemingly in range of these signals without explicitly asking to be. Operating similarly to the way that weather is beyond our direct control, “WSSID” shows us how permeating these networks have become by relaying direct environmental data about the locations in which they are housed.

And here is how the project looks:

10914863_10153039164919763_347593978656318309_o

Inspiration:

Although this project is basically useless given all the weather apps around, it is very interesting. I think it would be even more interesting if we added some other inputs. As we can see above, getting a comprehensive weather report from this project would be bothersome and expensive, since you would need multiple routers. One change we might make is to add a physical input so that a user can decide what he or she wants to see. For instance, the physical input could be a switch: when the switch is turned to temperature, the router changes its network name to show the temperature. In this way, we would only need one router to get comprehensive weather reports.

(Week 11) MKR1000 shield and API stuff

Group member: Diana, Peter

Instructor: Scott

This week, we worked with the MKR1000 and learnt how to connect it to WiFi. We also learnt how to get information and services from an API. So, over the weekend, we worked on applying these skills.

Idea

Our idea is to create a WiFi-connected weather module that uses several LEDs to represent the temperature. What's more, we added a fan, so that the module turns the fan on when it is too hot. We used an API from Yahoo, which provides weather information for Shanghai, and then made everything work according to the data we get from the API.

Demo

Here is a demo of what we have done:

IMG_2654

Problems and Solutions:

We met tons of problems this time. Some we solved; some we haven't. The first problem was converting the data into an actual integer that we could use. The reason is that the code provided by Temboo prints the data as characters, to be specific, one character at a time; it uses a while loop to print all the information from the cloud (e.g. “The temperature:\n 87”). Therefore, we had to get the characters one by one and distinguish numbers from letters. Furthermore, to finally convert the characters into an integer, we also had to use some math to interpret sequences like {8, 7} as 87. Luckily, this didn't take too long.
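The character-by-character conversion, including the {8, 7} → 87 step, can be sketched in Python as follows. The real code does the same digit accumulation on the Arduino; the function name and example string are illustrative.

```python
def parse_temperature(stream):
    """Rebuild an integer from a character-at-a-time stream such as
    'The temperature:\n 87' by keeping the digits and accumulating them
    (e.g. '8' then '7' becomes 8 * 10 + 7 = 87)."""
    value, seen_digit = 0, False
    for ch in stream:
        if ch.isdigit():
            value = value * 10 + int(ch)
            seen_digit = True
        elif seen_digit:
            break  # stop at the first non-digit after the number started
    return value if seen_digit else None
```

The multiply-by-ten accumulator is the "some math" mentioned above: each new digit shifts the running value one decimal place left.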

The second problem was with the fan. Apparently, a fan consumes too much power, so we used a transistor, namely a TIP122. But a new problem arose: we were not sure which transistor to choose, and the datasheet was too complicated to understand. We only figured out that the TIP122 is an NPN transistor and which pins are the emitter, base, and collector. So we took a random shot, and luckily it worked, with a pull-down resistor of 30 ohms.

The third problem is the toughest, and I am not confident that we have handled it completely. The WiFi shield is not very stable, especially when it is stressed with too many outputs. Sometimes we cannot get the information from the cloud. This happens periodically; we guess it is still about power, but we are not sure.

Code