Create an immersive environment with beautiful visuals that evokes the feeling of watching falling sakura (cherry blossoms).
-Kinetic Interfaces Final
I’ve always loved the idea of falling flower petals and attempted to recreate a similar scene for my Kinetic Interfaces project. While my prior project incorporated falling flower petals, the main focus was really on the interaction; instead of generating flower petals, we used looped PNGs in Processing, and as a result the visuals were fairly mediocre.
Having narrowed the concept specifically to cherry blossoms, I now had a better idea and vision of exactly what I was trying to capture. I very much wanted to use three.js to generate all the graphics and create a strong visual, as well as use some of the skills I’d developed in my other classes.
I first began by using one of my previous particle system sketches as a base. I learned not too long ago, embarrassingly enough, that you can very easily texture a GL point with a 2D texture, no 3D model required. This alleviated one of my long-standing issues with point systems in three.js: performance. In all my prior projects, any time I imported many non-native models and meshes I suffered from extremely poor performance. By texturing the GL points I was able to create a large particle system of textured particles.
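As a minimal sketch of this setup, the function below generates the flat position buffer such a system needs. In three.js this `Float32Array` would back a `BufferGeometry` `"position"` attribute, and a `PointsMaterial` with a `map` texture would stamp the 2D petal image onto every GL point. The function and parameter names here are illustrative, not from the project:

```javascript
// Generate a flat xyz position buffer for a petal particle system.
// In three.js: geometry.setAttribute('position', new THREE.BufferAttribute(buf, 3))
// and new THREE.PointsMaterial({ map: petalTexture, transparent: true })
// would texture every point with the 2D petal image, with no 3D models.
function makePetalPositions(count, spread) {
  const positions = new Float32Array(count * 3);
  for (let i = 0; i < count; i++) {
    positions[3 * i + 0] = (Math.random() - 0.5) * spread; // x, centered
    positions[3 * i + 1] = Math.random() * spread;         // y, start above ground
    positions[3 * i + 2] = (Math.random() - 0.5) * spread; // z, centered
  }
  return positions;
}
```

Because all petals live in one buffer and one draw call, this stays fast even with thousands of particles, unlike importing thousands of meshes.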
Once I created my particle system I began animating it; this is where the bulk of my time on this project went. Since I was doing a visualization of nature, I really needed to focus on organic and natural movement; robotic or overly algorithmic movement would not cut it. I ended up creating 12 different particle systems, each in its own THREE.Group. By giving each group different functions, movement patterns, and rotations, I hoped to create the illusion of randomness.
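One way to sketch that multi-group idea: give each group its own randomized fall speed, sway, phase, and rotation, so no two groups ever move in lockstep. All names and constants below are illustrative, not the project's actual values:

```javascript
// Create per-group motion parameters so the combined field of 12 systems
// reads as organic rather than uniform.
function makeGroupParams(n) {
  const groups = [];
  for (let i = 0; i < n; i++) {
    groups.push({
      fallSpeed: 0.3 + Math.random() * 0.4,    // downward drift per second
      swayFreq:  0.5 + Math.random() * 1.0,    // sine frequency
      swayAmp:   0.2 + Math.random() * 0.6,    // sine amplitude
      phase:     Math.random() * Math.PI * 2,  // desynchronizes the groups
      spin:      (Math.random() - 0.5) * 0.2,  // slow whole-group rotation
    });
  }
  return groups;
}

// Per-frame offset for one group: drift down while swaying side to side.
// rotY would be applied as the THREE.Group's rotation.y each frame.
function groupOffset(p, t) {
  return {
    x: Math.sin(t * p.swayFreq + p.phase) * p.swayAmp,
    y: -p.fallSpeed * t,
    rotY: p.spin * t,
  };
}
```

The random phase per group is what sells the illusion: twelve identical sine motions look mechanical, but twelve desynchronized ones read as wind.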
I quickly realized the limitation of these methods, though: by texturing the points with 2D textures, I was unable to truly rotate or twirl them. No matter how I moved the flowers, the motion often felt extremely choppy or mechanical. I eventually sought out a different way of doing things; after looking through the three.js example sketches, I found one that taught me about the shape-drawing functions. Using Bezier curves I was able to draw several different petal-like shapes that I could then turn into pseudo-3D models with the extrude functions (THREE.ExtrudeGeometry).
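The petal-outline idea can be sketched with plain cubic Bezier math: two mirrored curves from the base to the tip. In three.js the same control points would go into a `THREE.Shape` via `shape.bezierCurveTo(...)`, and `THREE.ExtrudeGeometry` would then thicken the flat outline into a pseudo-3D petal. The control points below are illustrative guesses, not the project's actual values:

```javascript
// Evaluate a cubic Bezier curve at parameter t in [0, 1].
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  return {
    x: u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
    y: u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y,
  };
}

// A petal outline: one Bezier curve up the right side (base to tip),
// a mirrored one back down the left side (tip to base).
function petalOutline(steps) {
  const base = { x: 0, y: 0 }, tip = { x: 0, y: 1 };
  const right = [base, { x: 0.6, y: 0.1 }, { x: 0.5, y: 0.9 }, tip];
  const left  = [tip, { x: -0.5, y: 0.9 }, { x: -0.6, y: 0.1 }, base];
  const pts = [];
  for (let i = 0; i <= steps; i++) pts.push(cubicBezier(...right, i / steps));
  for (let i = 1; i <= steps; i++) pts.push(cubicBezier(...left, i / steps));
  return pts;
}
```

Unlike the textured points, geometry built this way can be rotated and twirled freely on all three axes, which is exactly what the flat sprites could not do.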
After playing with these new 3D petals I eventually decided to scrap everything I had done prior, as I felt the obvious contrast between the two types of petals would be jarring and actually hurt the immersiveness. I kept the same format of using several groups and systems, but this time with better petals. I also added grass to my sketch by dissecting one of the three.js example sketches; the grass was made to move with sine waves.
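The sine-wave grass movement amounts to a small formula: each blade leans by a sine of time plus a spatial phase, so a visible wave travels across the field instead of every blade swaying in unison. Constants here are illustrative:

```javascript
// Lean angle (radians) for a grass blade at ground position (x, z) at time t.
// The spatial term offsets each blade's phase so the wind reads as a wave
// moving across the field rather than a uniform wobble.
function grassLean(x, z, t) {
  const windFreq = 1.2;  // oscillation speed
  const windAmp  = 0.25; // maximum lean in radians
  const spatial  = 0.8;  // phase shift per unit of ground position
  return windAmp * Math.sin(windFreq * t + spatial * (x + z));
}
```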
I wanted the project to have several different phases and patterns, and I struggled immensely with creating beautiful chaos, if you will. I eventually turned toward the autonomous agents we had learned about in class. My main concern, however, was that everything in class was taught not only in p5 but also in an object-oriented style. I had never done object-oriented three.js, and I was a bit clueless as to how to implement the third dimension. I considered looking into 3D Perlin noise and 3D vector fields, but due to my inexperience I wasn’t really able to do much with the information I found on those topics.
I eventually stumbled across the three.js boids sketch, which exhibited the autonomous agents we’d learned about in class, except in 3D. I began to dissect it to try and make use of it, but realized that it worked in a slightly different way than I was used to seeing. In reality the concept was still the same; I just wasn’t familiar enough with it to recognize it presented differently. I mistakenly thought that I needed to do object-oriented programming in order to make use of the code, so I created a new sketch and tried to make individual object-oriented particles. However, THREE.Points doesn’t really lend itself to object-oriented creation of single points, and I struggled immensely with accessing my points.
I realized, however, that I could substitute my pseudo-3D petals in for the birds of the boids sketch. I was successfully able to modify and use that code to create 3D attraction, repulsion, and flocking. With that out of the way I wanted to add interaction to my project; since I was still working on the Kinect-to-WebSocket pipeline, I first did a proof of concept using attraction to the mouse position. I realized that, unlike my previous projects where I projected my 2D position into the 3D world with raycasting, here I had to unproject and do the opposite. This required several functions and methods I had never heard of, so I searched Stack Overflow for a while until I found people who had done similar things. I was able to adapt some of their solutions into my own sketch, creating an invisible sphere mapped to my mouse position, which would then attract any flower petals within a certain vicinity.
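A minimal sketch of the steering forces involved, using plain `{x, y, z}` objects where the real sketch would use `THREE.Vector3`. Cohesion pulls a petal toward its neighbors' average position, separation pushes it off petals that get too close (alignment, matching neighbors' velocities, follows the same pattern), and the bounded `attract` is the invisible-sphere behavior: pull only when the petal is inside the sphere's radius of influence. All names and constants are illustrative:

```javascript
// Tiny 3D vector helpers.
const add   = (a, b) => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const sub   = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const scale = (a, s) => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const len   = a => Math.hypot(a.x, a.y, a.z);

// Cohesion: steer toward the average position of nearby petals.
function cohesion(self, neighbors) {
  if (neighbors.length === 0) return { x: 0, y: 0, z: 0 };
  let c = { x: 0, y: 0, z: 0 };
  for (const n of neighbors) c = add(c, n.position);
  c = scale(c, 1 / neighbors.length);
  return scale(sub(c, self.position), 0.01); // gentle pull toward the centroid
}

// Separation (repulsion): push away from petals closer than minDist,
// with force growing as distance shrinks.
function separation(self, neighbors, minDist) {
  let f = { x: 0, y: 0, z: 0 };
  for (const n of neighbors) {
    const d = sub(self.position, n.position);
    const dist = len(d);
    if (dist > 0 && dist < minDist) f = add(f, scale(d, 1 / (dist * dist)));
  }
  return f;
}

// Bounded attraction: pull toward a target (the invisible sphere that
// tracks the mouse or a Kinect joint) only within its radius of influence.
function attract(self, target, radius, strength) {
  const d = sub(target, self.position);
  const dist = len(d);
  if (dist === 0 || dist > radius) return { x: 0, y: 0, z: 0 };
  return scale(d, strength / dist);
}
```

Each frame, the three forces are summed into a petal's acceleration; swapping the mouse-driven target for a Kinect joint position changes nothing in this logic.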
At this point in the project it was already around Tuesday or Wednesday, technically past the initial due date, but thanks to Moon’s graciousness I had been scheduled to present on Thursday and had a few more days. I stayed overnight at the academic building from Tuesday to Thursday, without going home, working on this assignment. The last night was rather unproductive, as it was spent trying to learn the Windows command line and reading module errors. I was a novice with Node and with the different SDKs and Kinects, and I spent many hours that last night just downloading Windows 8 and various programs; the whole process of downloading software cost me nine hours of my last day. Even after those nine hours, for whatever reason, the Kinect was not recognized by the desktop I had borrowed from a friend, and I had no choice but to ask another close friend to trade computers with me for the coming few days. He graciously agreed, and I was able to download the Node kinect2 library. After playing around with the Node.js examples and using some of my prior knowledge, I managed to get skeletal tracking working over WebSockets. However, I drastically underestimated how long it would take to calibrate, re-unproject points, and map the Kinect world into the 3D world of my existing project. While I managed to get the petals to attract to both hands and the user’s head, the tracking was not very sensitive and was very spatially dependent.

For the actual show, because I didn’t have access to the presentation room until two hours prior and didn’t know how the other projects would be set up, I was unable to really utilize the Kinect the way I wanted to. No matter how I positioned the Kinect, the points of attraction simply would not be in a friendly place: users would have to bend down quite low and hold their hands quite high for the Kinect to sense them, and the attraction would map very differently in my sketch instead of appearing in the exact physical location, as I had hoped. For the purpose of keeping the visuals beautiful for a large duration of the show, I manually forced the sketch to stay in the auto-flocking pattern, which generated more beautiful visuals at the cost of interactivity.
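The calibration problem above boils down to a coordinate mapping: Kinect skeletal joints arrive in camera space (meters, origin at the sensor, +Z pointing away from it toward the user), and they have to be scaled, offset, and axis-flipped into the scene's coordinates so the attraction point lands where the user physically stands. A minimal sketch of that mapping, with illustrative names and a single uniform scale (in practice these were the per-setup values that took hours to tune):

```javascript
// Map a Kinect camera-space joint (meters, sensor at origin, +Z away from
// the sensor) into scene coordinates. scale/offset come from calibration;
// the Z sign flips because the scene camera looks down -Z toward the user.
function kinectToScene(joint, calib) {
  return {
    x:  joint.cameraX * calib.scale + calib.offsetX,
    y:  joint.cameraY * calib.scale + calib.offsetY,
    z: -joint.cameraZ * calib.scale + calib.offsetZ,
  };
}
```

Because the scale and offsets depend on exactly where the sensor sits relative to the projection, moving the Kinect on show day invalidates the calibration, which is the spatial dependence described above.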
I can’t lie: Prof. Moon and his classes have had the most pronounced impact on me and on the direction I’ve decided to take my academic career. His mini-intro to three.js in Kinetic Interfaces led to my obsession with it, and the installations we built in KI have also influenced me to think bigger and be more ambitious with physical computing. With the help of Jiwon (God bless her beautiful soul) I was able to use wood from the woodshop and build a small corridor room for users to walk into, employing a genius technique I had witnessed and that Moon recommended to me from Void at ITP: instead of using an opaque white curtain, we used 8-9 extremely thin layers of tulle. I designed the room to have “wings” sticking out, which allowed for several walls. This simple adjustment took the visuals to another level and brought out a surreal 3D depth in the images.
Because of the lack of time and of a dedicated space to build in, I wasn’t able to engineer a system that systematically pumped the tulle layers with perfume. Instead, I made a mad dash to the mall at Century Ave on the day of the show and bought some earthy forest mist from Innisfree. I would have gone with a regular perfume, but they didn’t carry any at the time. This turned into a huge mistake: even though I emptied over half the bottle onto the walls and the wood frames, the smell still wasn’t that impactful.