# Kinetic Interfaces (Midterm): A Band – Sherry

Partner: Peter

Ideation: After a brief discussion, Peter and I agreed on building an instrument simulation with Leap Motion, but later I found a fun project in Chrome Music Lab and came up with the idea of a choir conductor controlled by Leap Motion. When we met with Moon, he suggested combining the two ideas with the help of the oscP5 library, so we ended up creating a band in which a conductor controls several musical instruments, with each conductor/instrument running in its own Processing sketch with its own Leap Motion.

Implementation: Due to time constraints, we had only one kind of instrument — the guitar — communicating with the conductor. Peter was in charge of implementing the conductor, and I worked on the guitar. We both did some research to get a basic understanding of how oscP5 works.

I took inspiration from GarageBand and tried to draw a horizontal guitar interface as shown in the picture above, but during testing I found that the strings were too close together and the accuracy was very low — it was difficult to pluck a particular string. Therefore I decided to switch to a vertical interface:

Since Leap Motion has a wider detection range (and is more sensitive) along the x-axis, accuracy is higher and the user experience is better. The strings have different stroke weights to imitate a real guitar.

Above was my first version of the string-triggering code. However, with this approach, messages are printed continuously (and the sound file is replayed over and over) as long as my finger stays within a string's range, which wouldn't happen on a real guitar. To solve this problem, I changed the algorithm for detecting a string pluck.

The main idea is to figure out which strings lie between the previous and current finger positions and play the corresponding sounds in order. I denote the six strings by indices 0 to 5; the floats "start" and "end" are then the string indices corresponding to the previous and current finger positions. If start is greater than end, the user swiped to the left, and the strings in between are triggered from right to left, and vice versa. With this algorithm, holding a finger on a string doesn't retrigger the sound, making the experience more realistic.
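The plucking rule described above can be sketched roughly as follows (the function name and exact details are my assumptions, not the original Processing code):

```javascript
// Finger x positions are mapped to fractional string indices; any integer
// index crossed between the previous (start) and current (end) position is
// plucked, in swipe direction. Holding still crosses nothing.
function pluckedStrings(start, end, numStrings = 6) {
  const lo = Math.min(start, end);
  const hi = Math.max(start, end);
  const crossed = [];
  for (let i = 0; i < numStrings; i++) {
    if (i > lo && i < hi) crossed.push(i);   // string strictly between the two positions
  }
  if (start > end) crossed.reverse();        // leftward swipe plays right-to-left
  return crossed;
}
```

For example, a swipe from index 0.5 to 3.2 would pluck strings 1, 2, 3 in that order, while a finger resting exactly on string 2 plucks nothing.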

I used the z-axis of the Leap Motion to simulate the finger being on or above the strings. When z is less than the threshold (the finger moves closer to the user's body), the dot showing the position of the index finger is white, indicating that it is above the strings and won't trigger any sound as it moves. When z is greater than the threshold, the dot becomes red and sounds will be triggered.

String vibration effect and sine waves were added to enhance the visual experience.

Video demo of the leap motion guitar:

Then I started to study the oscP5 library. Thanks to the "oscP5sendreceive" example by Andreas Schlegel and noguchi's example, I created a very basic data-communication program: mouseX and mouseY in osc1 (the left sketch) were sent to osc2 (the right sketch) and used to draw an ellipse on the canvas.

Later I met with Peter; he had used the oscP5tcp example for the conductor, so we decided to use TCP for both sketches. Initially we planned to pass three parameters — a volume modifier, a frequency modifier, and a mute boolean — but we ran into two problems with the Minim library: we couldn't change the volume or the frequency of a sound file directly. After several trials we managed to modify the volume using setGain() instead of setVolume(), but unfortunately we could do nothing about the frequency.
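One likely reason setGain() behaved differently is that Minim's gain is specified in decibels rather than as a 0–1 amplitude. A hedged sketch of the conversion a conductor-sent linear volume modifier would need (the function name and the −80 dB mute floor are my own assumptions, not our actual code):

```javascript
// Map a linear volume modifier (0..1) to a decibel gain value,
// suitable for an API like Minim's setGain() that expects dB.
function volumeToGainDb(volume, floorDb = -80) {
  if (volume <= 0) return floorDb;     // treat 0 as effectively muted
  return 20 * Math.log10(volume);      // 1.0 -> 0 dB, 0.5 -> about -6 dB
}
```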

Final demo:

Guitar:
- Index finger: swipes horizontally above the Leap Motion to pluck the strings; moves back towards the body to lift away from the strings
- Dot on the screen: red – available, triggers sounds when moving across strings; white – unavailable, either muted by the conductor or too close to the body

Conductor:
- Hand: moves up and down within an instrument’s section of the screen to increase/decrease its volume; grab to mute that instrument

Feedback: Professor Chen brought up the “why” question, which I think is important and deserves further reflection. I agree with her that the idea of the conductor actually having control over the other players is great, but I can’t really answer why we need a Leap Motion to simulate a real instrument when the experience of playing a physical instrument is already good. I’m thinking of keeping the technical part but wrapping it in a different idea that is more interesting or meaningful (though I have no concrete idea for now).

# Kinetic Interfaces: W3 Assignment – Fruit Ninja (simplified)

I built a simple version of Fruit Ninja in Processing for this week’s assignment.

In this simplified version there are two modes: classic mode and zen mode. In classic mode there are bombs that end the game as soon as the blade (the mouse cursor) hovers over one; in zen mode there are no bombs. You can move your mouse around and enjoy the game :)

I created the first class, Fruit; the implementation is very similar to the ball example we did in class. In addition, I added gravity.

I adjusted the initial velocity and the gravity a little to make the motion look more natural. I also set the red channel to 0 for fruits so that the red bomb in classic mode would stand out later.
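The motion described above can be sketched minimally like this (class shape, field names, and constants are my assumptions, following the in-class ball example plus gravity):

```javascript
// A fruit is launched upward (negative vy) and constant gravity
// is added to the velocity each frame before integrating position.
class Fruit {
  constructor(x, y, vx, vy) {
    this.x = x; this.y = y;
    this.vx = vx; this.vy = vy;   // vy < 0 when launched upward
    this.gravity = 0.25;          // tuned by eye, as the post describes
  }
  move() {
    this.vy += this.gravity;      // gravity pulls the fruit back down
    this.x += this.vx;
    this.y += this.vy;
  }
}
```

After enough frames vy turns positive and the fruit falls back off the bottom of the canvas.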

This is basically what zen mode looks like:

I then created a new class, Bomb; basic functions like display(), move(), and acc() are the same as in Fruit, with one new boolean, isDead, added in order to end the game.

The rule is that only one bomb can appear on the screen at a time.

Classic mode:

I also wrote a class for the buttons. A button has three states: not pressed, pressed, and hovered. In the main sketch, the mode name is passed into the button’s drawing function.
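The three-state logic can be sketched as a small pure function (names are mine, not the actual class):

```javascript
// Decide a button's visual state from the cursor position and mouse press.
function buttonState(mx, my, pressed, bx, by, bw, bh) {
  const inside = mx >= bx && mx <= bx + bw && my >= by && my <= by + bh;
  if (!inside) return 'idle';          // not pressed, not hovered
  return pressed ? 'pressed' : 'hovered';
}
```

In a sketch you would call this every frame with mouseX/mouseY/mousePressed and pick the button's fill color from the returned state.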

The last step was to draw the blade. I drew a diamond trail from an ArrayList of mouse coordinates to imitate the good-looking blade in the original game. However, I found that my blade was always “hung from” (0, 0) on the canvas:

I tried to debug it in many ways, but none of them worked. I rewrote the loop in draw(), but nothing changed. I printed the center coordinate of the trailing diamond, and it was exactly the position where the diamond should have been drawn. Then I realized the problem might be in the diamond drawing itself.

Finally I figured out that it was because I called scale() before translate(), which messed up all the positions. Calling it after translate() fixed everything!
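The bug can be reproduced in miniature. Processing applies transforms in the order you call them, so a scale() issued before translate() scales the translation itself, dragging everything toward (0, 0). A tiny pure-JavaScript model of the two orders applied to a single point (not the actual sketch code):

```javascript
// Emulate Processing's matrix stack on one point (px, py):
// cx, cy is the translate() offset, s the scale() factor.
function drawAt(cx, cy, s, scaleFirst, px, py) {
  if (scaleFirst) {
    // scale() then translate(): the translation is scaled too
    return { x: s * (cx + px), y: s * (cy + py) };
  }
  // translate() then scale(): only the local shape is scaled
  return { x: cx + s * px, y: cy + s * py };
}
```

With a diamond meant to sit at (300, 200) and a scale of 0.1, the wrong order lands it at (30, 20) — pinned near the origin, exactly the “hung from (0, 0)” symptom.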

My final version demo:

# Kinetic Interfaces: W2 Assignment – Candle (Sherry)

This week I created a candle in Processing using user-defined functions and transformation functions.

Basically, you can light the candle by hovering the mouse over it, and put it out by holding down the mouse button to release smoke (carbon dioxide?) and moving it to where the flame is.

My three user-defined functions:
1. drawCandle(float y):
Since the candle gets shorter as it burns, I set the y-coordinate as a parameter of this function.

2. drawFlame(float x, float y):
I used a semicircle and 20 horizontal lines of different lengths (given by a cosine) to draw a natural flame shape.

I translated the center of the flame to the parameters (x, y) so that this function can draw flames both at the tip of the candle and following the cursor. I also scaled the flame randomly between 1 ± 0.15 every 15 frames to simulate flickering, but here I found there was always a black line moving up and down on the flame. I haven’t figured out the reason yet.
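The flicker idea can be sketched like this (names are my own; the random source is injectable only so the sketch is testable — in Processing you would just call random()):

```javascript
// Hold one random scale factor and refresh it every 15 frames,
// varying between 0.85 and 1.15 (1 +/- 0.15).
let flickerScale = 1;
function updateFlicker(frameCount, rand = Math.random) {
  if (frameCount % 15 === 0) {
    flickerScale = 1 + (rand() * 2 - 1) * 0.15;
  }
  return flickerScale;   // pass this to scale() before drawing the flame
}
```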

3. drawSmoke(float r, float x, float y):
I collected the coordinates of the vertices using the sketch below.

Then I used transformation functions to rotate and scale it.

So the final sketch looks like this:

# Animation – Final Project: Photoshop Tool Basics (feat. Pacman)

Title: Photoshop Tool Basics (feat. Pacman)

Screenshot:

Description: Learn how to use tools in Adobe Photoshop with Pacman!

Credit: music: Rags 2 Riches Rag – Audionautix, from the YouTube Audio Library
sound effects: all from freesound.org, public domain

My blog: https://sherry-zc.tumblr.com/

Actually my idea changed a lot throughout the second half of the semester. Initially I wanted to simulate a side-scrolling game, but that would have required an extra actor for the main character, and asking someone not enrolled in this class to spend so much time on my animation seemed impractical. So I changed my mind and decided to do an animation about color mismatch, but after an accident with my props there wasn’t enough time left to shoot some of the scenes, and I also found it hard to keep consistency between takes of the same scene because they were filmed on different days and at different times (and I didn’t use Dragonframe). In the end I kept the idea of doing something interesting with a weird/mismatched feeling and finalized my concept: a Photoshop tutorial with paper cutouts simulating the effects of different Photoshop tools. I expected that after a short stretch of normal demo, things would go wild and become funny. I received feedback suggesting Pacman could be a character fighting against the cursor so as not to be photoshopped. I hadn’t planned to let Pacman interact with the cursor, but I thought it would be really funny, so I took this idea and developed my final story.

(I can’t rotate it 🙁 )

For my animation I mainly used Dragonframe to shoot the frame-by-frame stop-motion animation (the canvas), and I scanned the paper-collage Photoshop UI to use as the workspace. I used Premiere to edit, stabilize, and merge the Dragonframe clips; Photoshop to draw cursors and tool icons, crop useful items (such as Mac close buttons) out of photos, and do other photo editing; and After Effects to composite the final animation and add sound. I thought shooting the photos would take the most time, but in fact I spent several times longer in After Effects. I needed to take care of many subtle changes and details to make it look like what we actually see when using Photoshop. For example, the cursor is different over the canvas than over the workspace, and every time I switch tools, the toolbar has to be replaced with the version where that tool is selected.

One of the most important things I learned is to look at the bigger picture from the very beginning of a big project. I changed my idea twice because I only found new problems as I made the animation; if I had thought through the potential obstacles beforehand, some unnecessary trials could have been avoided. Apart from that, I learned a lot about After Effects. Before this course I knew nothing about it, but now I’m getting familiar with it and have learned how to organize files and compositions for a big project so that I don’t get lost when I want to modify only part of it. Last but not least, I learned that sound really helps tell the story in animation: without the sound effects, the mouse clicks and character actions wouldn’t be highlighted, and the audience wouldn’t have such a strong immersive feeling.

# NOC – Final Project: Color Adder, Sherry

Final Presentation Slides

I wanted to recreate a fish pond with a flocking system, so I experimented with my week 11 flocking-fish assignment, using map() to convert each fish’s x and y position into its color values so that the color changes as the fish move. (Code for this sketch can be found here.)
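The position-to-color trick is just a linear remap. A sketch of the idea with p5's map() re-expressed as a plain function (the canvas size and channel choices below are illustrative assumptions, not the actual sketch):

```javascript
// p5.js-style map(): linearly rescale value from [inMin, inMax] to [outMin, outMax].
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}

// e.g. on a 600x400 canvas, let x drive red and y drive blue,
// so a fish's color shifts continuously as it swims.
function fishColor(x, y, w = 600, h = 400) {
  return { r: map(x, 0, w, 0, 255), g: 100, b: map(y, 0, h, 0, 255) };
}
```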

I thought it was very beautiful and interesting, so I decided to do something related to color changing instead of sticking to the fish idea.

I combined dat.gui.js with the flocking system and created a very basic sketch in which users hit buttons to generate two boids in the colors they picked beforehand. On pressing the “blend” button, the two boids move to the center under a seeking force and a separating force, and during this process their colors gradually change into a third color whose channels are calculated as “red(c1) + red(c2), green(c1) + green(c2), blue(c1) + blue(c2)”. I still used map() because I wanted a more linear change: lerpColor() works, but the color changes too fast at the very beginning and fails to create the slow merging effect I wanted.
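The additive blend and the distance-driven fade can be sketched per channel like this (function names and the clamping choice are my assumptions; the original sketch isn't shown here):

```javascript
// Target channel: sum of the two picked colors' channels, clamped to 255,
// mirroring "red(c1) + red(c2)" etc.
function blendChannel(c1, c2) {
  return Math.min(255, c1 + c2);
}

// Fade a boid's own channel toward the target linearly in the distance
// remaining to the center -- the same math as map(dist, startDist, 0, own, target),
// which avoids lerpColor()'s fast jump at the start.
function fadeChannel(own, target, dist, startDist) {
  const t = 1 - dist / startDist;     // 0 at the start, 1 at the center
  return own + (target - own) * t;
}
```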
Later I thought having only the colorful particles was a little boring, and I recalled Moon’s suggestion during the final concept presentation that it could work as an educational/informative project about color. So I built a function that also shows the name of the blended color with particles. (I borrowed Chirag Mehta’s ntc.js to get the color names — thanks for his great library!) An important problem was how to make the particles display in the shape of words. At first I drew the text where I wanted the particles to be, then checked the pixels in and around the text area with a step of 5, testing whether each pixel’s color matched the third color; if so, a boid was created and displayed at that position. I offset these particles with sin/cos values to give them some energy. To end each color-adding trial, I built a reset function: particles explode outward and shrink over time, and once a particle moves off screen or its lifespan drops below 0, it is spliced from the array.
This sketch seemed to work well, but I found that picking certain colors returned an error. After a long time debugging I realized that when we pick colors on the dropdown palette in the control panel, the values are floats, not integers, which breaks the dec-to-hex conversion and the color-name lookup. After I wrapped all the values in int(), the bug was fixed. [Code]
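The bug in miniature: dat.gui can hand back float channel values, and JavaScript's toString(16) happily renders a float with a fractional part, producing a "hex" string that a name lookup like ntc.js can't parse. Flooring to an integer first (the int() fix) resolves it. The helper name below is my own:

```javascript
// Convert one 0-255 channel to a two-digit hex string.
// Without the floor, a float like 200.4 yields "c8.666..." -- invalid hex.
function channelToHex(v) {
  return Math.floor(v).toString(16).padStart(2, '0');
}
```

For example, `(200.4).toString(16)` contains a decimal point, while `channelToHex(200.4)` gives a clean `"c8"`.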

But another obvious issue with this version is that scanning the pixels within the text area takes a long time, and the sketch freezes while doing it. I tried increasing the step (x += 8 instead of x += 5) and decreasing the text size, but that didn’t help much and lost many points, as in this picture:

While trying to solve this problem, I kept coming up with new improvements, such as showing the names of the addend colors as well, and relating the amplitude of the particles’ oscillation to the saturation of the color: the more saturated the color, the more violently the particles vibrate. [Code]

After researching online, I found the function textToPoints(), which returns an array of points along the outline of text in the font you call it on. It improves performance a lot, and there is no longer any delay. [Code]

Another improvement I wanted was to have the particles move into place to form the color name rather than appearing there directly. I tried using the same boids both for generating the color and for moving to form the words (a rough demo below).

However, it turned out that each word has hundreds or even thousands of points, which would mean hundreds of particles flocking from the very beginning — an extremely heavy burden for my laptop. To keep the sketch running smoothly, I made two adjustments: 1) giving up the soft-body model I had drawn for every single boid instance (vertex polygons that create a soft, bouncy feeling); 2) separating the color-generating boids from the text-forming boids. So when the “name” button is clicked, a new array of boids is generated at the center of the original flock; each new particle is assigned a target point on the text and moves under the seeking force. I disabled the separating force to keep the frame rate high. Apart from that, I tested a lot to optimize my code, and that was a tricky part. For example, whenever either addend color changes, the blended color has to be recalculated and the boids recreated, so these instructions must run more than once — but running them every frame slows the sketch down a lot. My solution was to monitor the colors constantly but only recalculate and regenerate on the first run or when one of the two colors has actually changed.
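A rough sketch of the seek-only text particles described above (class and field names are my assumptions; separation is omitted, as in the final version):

```javascript
// Each boid steers toward its assigned target point (e.g. one point
// returned by textToPoints), using the classic seek-with-arrival behavior:
// desired velocity points at the target, slowing near it, and the steering
// force (desired - velocity) is clamped to maxForce.
class TextBoid {
  constructor(x, y, tx, ty) {
    this.x = x; this.y = y;
    this.vx = 0; this.vy = 0;
    this.tx = tx; this.ty = ty;
    this.maxSpeed = 4;
    this.maxForce = 0.3;
  }
  seek() {
    let dx = this.tx - this.x, dy = this.ty - this.y;
    const d = Math.hypot(dx, dy) || 1;
    const speed = Math.min(this.maxSpeed, d);  // slow down on arrival
    dx = (dx / d) * speed; dy = (dy / d) * speed;
    let sx = dx - this.vx, sy = dy - this.vy;  // steering = desired - velocity
    const m = Math.hypot(sx, sy);
    if (m > this.maxForce) { sx = (sx / m) * this.maxForce; sy = (sy / m) * this.maxForce; }
    this.vx += sx; this.vy += sy;
    this.x += this.vx; this.y += this.vy;
  }
}
```

Skipping the separating force makes each frame O(n) in the number of particles instead of O(n²), which is what keeps the frame rate usable with thousands of outline points.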

After all these efforts, here’s my final demo!

It feels really good to create something I enjoy using the knowledge we have learnt in class!! Bringing a natural, organic feeling into art makes the art even more beautiful 🙂

# NOC – Week 11: Flocking fish, Sherry

For the weekly flocking assignment I did a test for my final project: I combined the soft-body example I mentioned in my presentation with the flocking example. In this version I only enabled the seeking force (following the mouse) and the separating force.