Nature of Code: Maggie Walsh Final

For my final for Nature of Code, I wanted to simulate the natural way a dandelion moves. I ran into quite a few obstacles, but I learned a lot during the process regardless.

In the end, my sketch became a dandelion that has its seeds slowly blown away; the sketch then follows the seeds and displays a poem about dandelions, set to some music.

This is the poem:

[Image: "Dandelion" poem by Suzy Kassem]

And this is the song:

I must say, for my tastes my project is a little too dainty. I enjoy making bright, flashy things, and I really think I missed the mark on what I should have chosen for my Nature of Code final. I came up against a lot of struggles, and instead of diving into them, I avoided them and searched out alternate solutions. Sometimes this is an okay thing to do, but I did not put sufficient effort into solving problems and instead just searched for different answers. I think I dug myself into a deeper hole by doing this. For example, I initially tried all of the following things without much success:

  • Pixel array sky
  • Pixel array ground
  • Connecting dots: particles on the outside connected with the inner bud
  • Not lines, just many particles and a low frameRate
  • Calculating the distance between a particle and the initial bud to decide when to apply forces
  • Creating a count to decide when to apply forces to the flyaway particles
  • Growing a new stem when the particles bloom
  • Fading the stem as it grows (the count worked for this one)
  • A flow field to control the particles

Thankfully, Moon helped me so much in realizing the sketch in the end. He helped me raise my frameRate, because my entire code structure was very weak in the beginning; it was not logically sound, and he helped fix that. He also helped me a lot with making the screen translate to the seed's location, and with making the seeds fly more naturally using sine waves.
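The sine-wave drift and the translate-to-follow idea look roughly like this (a minimal p5.js sketch, not my actual project code; the seed object and the wind values here are just for illustration):

```javascript
// Minimal p5.js sketch: a seed drifts on a gentle wind plus a sine-wave
// wobble, and the canvas translates so the view follows the seed.
let seed;

function setup() {
  createCanvas(600, 400);
  seed = { pos: createVector(300, 200), vel: createVector(0, 0) };
}

function draw() {
  background(200, 225, 255);

  // gentle wind to the right plus a vertical sine-wave wobble
  let wind = createVector(0.02, 0);
  let wobble = createVector(0, 0.05 * sin(frameCount * 0.05));
  seed.vel.add(wind).add(wobble);
  seed.vel.limit(2);
  seed.pos.add(seed.vel);

  // translate so the screen follows the seed
  push();
  translate(width / 2 - seed.pos.x, height / 2 - seed.pos.y);
  ellipse(seed.pos.x, seed.pos.y, 8, 8); // the seed
  pop();
}
```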

 

I really enjoyed this project, but it caused me a lot of stress. I think I should have done something with mapping sound like I did for my midterm, because I really enjoyed that project.

Future:

Because of what I just said, I think it would be interesting to experiment with sound in this project. Perhaps sound could map the way the dandelions fly, kind of like my midterm.

Also, I would like to do an iteration of this project with physical input from an Arduino. I initially had that planned, and I even worked quite a bit on setting up the Arduino to control the sketch through a sensor, but I only got it working a bit with a potentiometer, and not with the piezo or microphone like I had previously imagined (also, the lab did not have those two sensors in stock, so that is another factor).

Thank you for an incredible year Moon! 🙂

Here is my code!


NOC – Week 1 – Bouncing Ball – Elaine

This is my first assignment during the semester, and I did not have this account back then…

For this project, I tried to create a bouncing ball that is, in some sense, controlled by the user. Press the up, down, left, and right arrow keys to control the ball. Press the space key to stop the movement.

The project demo is here:

https://elaineang.github.io/NatureOfCode/Assignment1/

The complete source code can be found here:

https://github.com/ElaineAng/NatureOfCode/tree/master/Assignment1
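The core of the control scheme looks roughly like this in p5.js (a simplified sketch, not the exact assignment code):

```javascript
// Arrow keys push the ball, space stops it, and the ball bounces off the edges.
let pos, vel;

function setup() {
  createCanvas(400, 400);
  pos = createVector(width / 2, height / 2);
  vel = createVector(2, 1.5);
}

function draw() {
  background(240);
  pos.add(vel);
  // bounce off the canvas edges
  if (pos.x < 10 || pos.x > width - 10) vel.x *= -1;
  if (pos.y < 10 || pos.y > height - 10) vel.y *= -1;
  ellipse(pos.x, pos.y, 20, 20);
}

function keyPressed() {
  if (keyCode === UP_ARROW) vel.y -= 1;
  if (keyCode === DOWN_ARROW) vel.y += 1;
  if (keyCode === LEFT_ARROW) vel.x -= 1;
  if (keyCode === RIGHT_ARROW) vel.x += 1;
  if (key === ' ') vel.set(0, 0); // space stops the movement
}
```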

 

Nature of Code: Maggie Walsh Midterm

My midterm for Nature of Code was one of the things I am most proud of creating! I really like how I connected the concepts we learned in class to inputs from music, and made something that looked natural and cohesive.

My main struggles with this project were with the particle system: getting it to respond well and to run at the correct frameRate.

Additionally, I had to map the values of the input from the music correctly so that the visuals were effective.

In the future I really, really want to work more on applying different parts of the music to my sketch. I want to work with frequencies, not just volume. And I would love to look at the speed of the music if I could figure that out! 🙂 I would map that to the frequency of the sine wave.
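A hedged sketch of the volume-mapping idea with p5.sound (it assumes a file called song.mp3; my actual midterm code differs, and p5.FFT() would be the starting point for the frequency idea above):

```javascript
// Map the song's volume level to size and color with map().
let song, amp;

function preload() {
  song = loadSound('song.mp3');
}

function setup() {
  createCanvas(600, 400);
  song.loop(); // some browsers may require a click before audio starts
  amp = new p5.Amplitude(); // tracks overall volume
}

function draw() {
  background(0);
  let level = amp.getLevel(); // roughly 0.0 - 1.0
  let r = map(level, 0, 0.3, 10, 200);     // volume -> size
  let hue = map(level, 0, 0.3, 120, 360);  // volume -> color
  colorMode(HSB, 360, 100, 100);
  fill(hue % 360, 80, 100);
  ellipse(width / 2, height / 2, r, r);
}
```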

One of my favorite things to do in Nature of Code, as you have probably seen throughout my sketches, is to map color. 🙂

Here is the code for the project.

NOC – Week 10 Assignment – Kevin Li

Instead of autonomous agents, I decided to code cellular automata (which is sort of related in topic). I made a Game of Life in Processing.

[Screenshot: Game of Life running in Processing]

The rules of the game are as follows.

The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive or dead, or “populated” or “unpopulated”. Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

  1. Any live cell with fewer than two live neighbours dies, as if caused by underpopulation.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overpopulation.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

This was relatively easy to do in p5.js (actually less than 20 lines).

I based my sketch on the example code found at: https://p5js.org/examples/simulate-game-of-life.html
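A compact p5.js version of the rules, close in spirit to the example linked above (two grids are kept and swapped each generation):

```javascript
// Game of Life: count each cell's live neighbours and apply the four rules.
let cols, rows, w = 10, grid, next;

function setup() {
  createCanvas(400, 400);
  cols = width / w;
  rows = height / w;
  grid = Array.from({ length: cols }, () =>
    Array.from({ length: rows }, () => floor(random(2))));
  next = Array.from({ length: cols }, () => Array(rows).fill(0));
  frameRate(10);
}

function draw() {
  background(255);
  for (let x = 0; x < cols; x++) {
    for (let y = 0; y < rows; y++) {
      // count live neighbours, wrapping around the edges
      let n = 0;
      for (let i = -1; i <= 1; i++)
        for (let j = -1; j <= 1; j++)
          n += grid[(x + i + cols) % cols][(y + j + rows) % rows];
      n -= grid[x][y];
      // the four transition rules listed above
      if (grid[x][y] === 1 && (n < 2 || n > 3)) next[x][y] = 0;
      else if (grid[x][y] === 0 && n === 3) next[x][y] = 1;
      else next[x][y] = grid[x][y];
      if (grid[x][y]) { fill(0); rect(x * w, y * w, w, w); }
    }
  }
  [grid, next] = [next, grid]; // swap generations
}
```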

NOC – Week 9 Assignment – Kevin Li

I have done a lot of particle systems as assignments in this class (my midterm, particle engine, etc.) so I will try something simpler for this assignment.

Instead of the many shapes that we generated in class, I made a simple snow falling sketch with particle systems.

[Screenshot: falling snow particle system]

I actually played around with depth and z-indexing (layers) to get a more realistic 3-D effect.

[Screenshot: snow with depth layers for a 3-D effect]
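A simplified sketch of the depth trick (not the exact assignment code): each flake stores a z value that scales its size, fall speed, and brightness, so nearer flakes read as closer.

```javascript
let flakes = [];

function setup() {
  createCanvas(600, 400);
  for (let i = 0; i < 200; i++) {
    flakes.push({
      x: random(width),
      y: random(height),
      z: random(0.2, 1) // depth: 0.2 = far, 1 = near
    });
  }
  flakes.sort((a, b) => a.z - b.z); // draw far flakes first, near ones on top
  noStroke();
}

function draw() {
  background(20, 30, 60);
  for (let f of flakes) {
    f.y += 2 * f.z;                                        // nearer flakes fall faster
    f.x += 0.5 * f.z * sin(frameCount * 0.02 + f.z * 10);  // slight sideways drift
    if (f.y > height) f.y = -5;
    fill(255, 150 + 100 * f.z);                            // nearer flakes are brighter
    ellipse(f.x, f.y, 8 * f.z, 8 * f.z);                   // and bigger
  }
}
```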


NOC – Final Project Documentation – Kevin Li

Project Title
Metempsychosis
Project Video

Project Images
Description of Development of the Project
This project was completed in approximately three weeks time. I began the project wishing to learn more about GLSL after seeing the incredible demos on ShaderToy as well as WebGL demos on Chrome Experiments that use shaders. GLSL is a programming language that executes directly on your GPU. These programs are called “shaders” – tiny computer programs that explain how to draw things. The big conceptual shift when considering shaders is that they run in parallel. Instead of looping sequentially through each pixel one-by-one, shaders are applied to each pixel simultaneously, thus taking advantage of the parallel architecture of the GPU.
This is very powerful and is the basis of a concept called GPGPU. GPGPU essentially exploits the parallel nature of the graphics card to do more than simply render to the screen, namely, to do computation that you would normally do on the CPU. While a GPU normally uses its vertex and pixel processors to perform shading on a mesh, GPGPU uses those processors to do calculations and simulations. We can perform GPGPU in concert with WebGL via a process called render-to-texture (RTT). In essence, the output of a shader can be a texture, and that texture can be the input for another shader. If we store two or more of these textures (or bitmaps) in memory (wrapped in what are referred to as Frame Buffer Objects, or FBOs for short), then we can read from and write to a buffer without breaking the parallel nature of the GPU. We also exploit the fact that while a frame buffer typically holds RGB(A) values, we can instead use it to store XYZ values. This allows us to simulate and store particle position data in fragment shaders, as textures on the GPU, rather than as objects on the CPU side, which allows for massive speed-ups due to parallelization.
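A stripped-down sketch of the "ping-pong" idea (not the full simulation, and written against the current three.js API, which names a few calls differently than the 2017-era version): two float render targets are swapped every frame so the shader can read last frame's positions from one texture while writing new positions into the other. The simulationMaterial, simScene, and simCamera names are placeholders for a full-screen-quad simulation pass.

```javascript
const size = 256; // simulation texture is size x size particles

function makeTarget() {
  return new THREE.WebGLRenderTarget(size, size, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType // store XYZ positions instead of colors
  });
}

let rtA = makeTarget();
let rtB = makeTarget();

function simulate(renderer, simScene, simCamera, simulationMaterial) {
  // read last frame's positions from A, write new positions into B
  simulationMaterial.uniforms.positions.value = rtA.texture;
  renderer.setRenderTarget(rtB);
  renderer.render(simScene, simCamera);
  renderer.setRenderTarget(null);

  // swap: next frame reads what we just wrote
  [rtA, rtB] = [rtB, rtA];
}
```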
I did a few days of experiments with learning about GLSL, picking up basic shader language and syntax, understanding the difference between vertex and fragment shaders, clip coordinates, coordinate systems, the model / view / projection matrix, and having a basic understanding of matrix translation, rotation, scaling. I wanted to understand just enough to be able to understand how to write syntax for shaders and how to implement the FBO simulations.
I rewrote and implemented parts of @nicoptere's (https://github.com/nicoptere/FBO), @cabbibo's (https://github.com/cabbibo/PhysicsRenderer), and @zz85's (https://threejs.org/examples/webgl_gpgpu_birds.html) FBO / GPGPU simulation, vertex, and fragment shaders, as well as the "ping-pong" texture-flipping technique, to get this result.
[Screenshots: FBO particle simulation]
I added simplex noise and curl noise from Ashima Arts (https://github.com/ashima/webgl-noise).
[Screenshot: particles with simplex/curl noise]
I then modeled a few simple shapes (plane, sphere, rose) with mathematical functions (parametric surfaces), and also imported 3D .obj models from Three Dreams of Black (an interactive WebGL video – http://www.ro.me/tech/) to place particles on. Particles were placed on the vertices of the 3D mesh.
The physics of the project is straightforward: a continual gravitational attraction toward the defining shape (whether a mathematical shape or a mesh), plus a repulsive force when the particles are disturbed at a particular point (mouse click). Forces are further controlled with decay applied to the velocity and to the strength of attraction or repulsion. Curl noise can also be added to the velocity.
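In the project this update happens per particle inside a fragment shader, but written out as plain JavaScript with three.js vectors, the force logic is roughly the following (all names and constants here are illustrative, not the real simulation code):

```javascript
// target: the particle's home point on the shape/mesh
// clickPoint: where the mouse disturbance happened
function updateParticle(p, target, clickPoint, disturbed) {
  // continual attraction back toward the defining shape
  const toTarget = target.clone().sub(p.position);
  p.velocity.add(toTarget.multiplyScalar(0.01)); // attraction strength

  // repulsion away from the disturbance point when the mouse is clicked
  if (disturbed) {
    const away = p.position.clone().sub(clickPoint);
    const d = Math.max(away.length(), 0.001);
    p.velocity.add(away.normalize().multiplyScalar(2.0 / d));
  }

  p.velocity.multiplyScalar(0.95); // decay keeps the system from exploding
  p.position.add(p.velocity);
}
```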
Resources
http://barradeau.com/blog/?p=621
http://www.lab4games.net/zz85/blog/2013/12/30/webgl-gpgpu-and-flocking-part-1/
http://www.lab4games.net/zz85/blog/2014/02/17/webgl-gpgpu-flocking-birds-part-ii-shaders/
http://www.lab4games.net/zz85/blog/2014/04/28/webgl-gpgpu-flocking-birds-the-3rd-movement/
https://github.com/nicoptere/FBO
http://www.hackbarth-gfx.com/2013/03/17/making-of-1-million-particles/
https://www.youtube.com/watch?v=HtF2qWKM_go

NOC – Final Project: Color Adder, Sherry

Final Presentation Slides

I wanted to recreate a fish pond with a flocking system, so I experimented with my week 11 flocking-fish assignment by using map() to convert the x and y positions into color values, so the fish change color as they move. (Code for this sketch can be found here.)

I thought it was very beautiful and interesting, so I decided to do something related to color changing instead of sticking to the fish idea.

I combined dat.gui.js with a flocking system and created a very basic sketch in which users can hit buttons to generate two boids of the colors they picked beforehand. By pressing the "blend" button, the two boids move to the center under a seeking force and a separating force, and during this process their colors gradually change into a third color whose value is calculated as "red(c1) + red(c2), green(c1) + green(c2), blue(c1) + blue(c2)". I still used map() because I wanted a more linear change. lerpColor() can work, but the color is altered too fast at the very beginning and fails to create the slowly merging effect I want.
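The blending math, sketched in p5.js (variable names are illustrative, not from my actual sketch): each channel of the two picked colors is summed and capped at 255, and map() eases the boids' current color toward that target as they travel to the center.

```javascript
// Additive blend of two p5 colors, capped per channel at 255.
function blendedColor(c1, c2) {
  return color(
    min(red(c1) + red(c2), 255),
    min(green(c1) + green(c2), 255),
    min(blue(c1) + blue(c2), 255)
  );
}

// progress goes from 0 (start) to 1 (boids reach the center),
// so each channel changes linearly rather than jumping at the start.
function currentColor(c1, target, progress) {
  return color(
    map(progress, 0, 1, red(c1), red(target)),
    map(progress, 0, 1, green(c1), green(target)),
    map(progress, 0, 1, blue(c1), blue(target))
  );
}
```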
Later I thought having only the colorful particles was a little boring, and I recalled what Moon suggested during the final concept presentation: that it could work as an educational/informative project about color. So I built a function to show the name of the blended color, also with particles. (I borrowed Chirag Mehta's ntc.js to get the names of colors. Thanks to his great library!) A very important problem I needed to deal with was how to make the particles display in the shape of words. At first I came up with a method that draws the text where I want the particles to be, then checks the pixels in and around the text area with a step of 5 to see whether each pixel's color is the same as the third color. If so, a boid is created and displayed at that position. I modified the positions of these particles by adding sin/cos values so as to add some energy to them. To end each color-adding trial, I built a reset function: particles explode toward the outside of the screen and shrink as time goes by. Once a particle moves out of the screen or its lifespan decreases below 0, it is spliced from the array.
This sketch seemed to work well, but I found that once I picked some random colors it would return an error. After a long time of debugging I realized that it was because, when we pick colors on the dropdown palette in the control panel, the values are floating-point numbers, not integers. This caused problems in the decimal-to-hex conversion and in getting the name of the color. After I added int() to all the values, the bug was fixed. [Code]

But another obvious issue with this version is that it takes a long time to go through the pixels within a certain area, and the sketch gets stuck while doing this. I tried to solve it by increasing the step (x += 8 instead of x += 5) and decreasing the size of the text, but it didn't help that much and lost many points, as in this picture: [image]

While trying to solve this problem, I kept coming up with new improvements, such as building functions to show the names of the addend colors as well, and relating the amplitude of oscillation to the saturation of the color: the more saturated the color is, the more violently the particles vibrate. [Code]

After researching online, I found that there's a function called textToPoints() that returns an array of points on the outline of text, for the font you call the function on. It improves performance a lot and there's no delay. [Code]
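A rough example of how textToPoints() can be used (it assumes a font file, here called font.ttf, loaded in preload; sampleFactor controls how densely the outline is sampled):

```javascript
let font, pts = [];

function preload() {
  font = loadFont('font.ttf');
}

function setup() {
  createCanvas(600, 200);
  // outline points of the word, to be used as particle targets
  pts = font.textToPoints('ORANGE', 50, 130, 96, { sampleFactor: 0.2 });
  noStroke();
}

function draw() {
  background(255);
  fill(255, 140, 0);
  for (let p of pts) {
    // a small sine wobble gives the points some energy
    ellipse(p.x + 2 * sin(frameCount * 0.1 + p.y), p.y, 4, 4);
  }
}
```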

Another improvement I would like to implement is to have particles move and then form the color name rather than directly appear in place. I tried to use the same boids for generating color and moving to form words (a rough demo below)

However, it turned out that each word had hundreds or even thousands of points. In that case there would need to be hundreds of particles flocking from the very beginning, which is an extremely heavy burden for my laptop. To keep my sketch running smoothly, I made two adjustments: 1) giving up the soft-body model I drew for every single boid instance (which used vertex polygons to create a soft, bouncing feeling); 2) separating the color-generating boids from the boids that move to form text. So when the "name" button is clicked, a new array of boids is generated at the center of the original flocking boids. The new particles are each assigned a target point on the text and move according to the seeking force. I disabled the separating force in order to keep the frame rate high. Apart from that, I tested a lot to optimize my code, and that's really the tricky part. For example, every time either addend color is changed, the blended color should be recalculated and the boids should be recreated. These instructions need to be executed more than once, but performing them every frame slows down the sketch a lot. My solution is to monitor color changes constantly, but only recalculate and regenerate things the first time through or when one of the two colors is changed.

After all these efforts, here’s my final demo!

It feels really good to create something I enjoy using the knowledge we have learned in class!! Bringing a natural and organic feeling into art makes the art even more beautiful 🙂

[Final Code]

 

NOC – append.(‘User’) v3 – bh1525 – Prof. Moon

Goal:

Create an immersive environment with beautiful visuals that can emit the feelings of watching falling sakura and cherry blossoms.

Inspiration:

-Kinetic Interfaces Final

-Cherry Blossoms

-Sakura

Ideation:

I’ve always loved the idea of falling flower petals and attempted to recreate a similar scene for my Kinetic Interfaces project. While my prior project incorporated falling flower petals, the main focus was really on the interaction, and instead of generating flower petals we used looped PNGs in Processing, so it suffered from fairly mediocre visuals.

Having specified the concept a bit further to be cherry blossoms, I now had a better idea and vision of exactly what I was trying to capture. I very much wanted to try and use three.js to generate all the graphics and create a strong visual as well as use some of the skills I’d developed in my other classes.

Coding Process:

I first began by using one of my previous particle system sketches as a base. I learned not too long ago, embarrassingly enough, that you can very easily texture a GL point with a 2D texture, without a 3D model. This alleviated one of the long-standing issues I had with point systems in three.js: performance. In all my prior projects, any time I imported many non-native models and meshes I suffered from extremely poor performance. By texturing the GL points I was able to create a large particle system of textured particles.
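The gist of the textured-points approach, in recent three.js syntax (a sketch, not the project code; petal.png and the particle count are placeholders):

```javascript
// One buffer of positions plus one PointsMaterial with a petal texture,
// so thousands of sprites render in a single draw call.
const geometry = new THREE.BufferGeometry();
const positions = new Float32Array(5000 * 3);
for (let i = 0; i < positions.length; i++) {
  positions[i] = (Math.random() - 0.5) * 200;
}
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

const material = new THREE.PointsMaterial({
  size: 4,
  map: new THREE.TextureLoader().load('petal.png'),
  transparent: true,
  depthWrite: false // avoid hard edges where petals overlap
});

const petals = new THREE.Points(geometry, material);
scene.add(petals); // assumes an existing `scene`
```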

Once I created my particle system I began animating the particles, and this is where the bulk of my time on this project went. Since I was doing a visualization of nature, I really needed to focus on organic and natural movement; robotic or overly algorithmic movement would not cut it. I ended up creating 12 different particle systems, each in a new THREE.Group. By putting different functions, movement patterns, and rotations in each group, I hoped to create the illusion of randomness.

I quickly realized the limitation of these methods, though: by texturing the points with 2D textures, I was unable to truly rotate or twirl them. No matter how I moved the flowers, it often felt extremely choppy or mechanical. I eventually sought out a different way of doing things. After looking through all the three.js example sketches, I managed to find one that taught me about the shape-drawing functions. Using bezier curves I was able to draw several different petal-like shapes that I could then turn into pseudo-3D models with the extrude functions.
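A small sketch of the petal idea: a 2D outline drawn with bezier curves, then extruded into a thin pseudo-3D mesh (the control points here are made up, and this uses the current ExtrudeGeometry option names rather than the 2017-era ones):

```javascript
// Petal outline: up one side with a bezier curve, back down the other.
const petalShape = new THREE.Shape();
petalShape.moveTo(0, 0);
petalShape.bezierCurveTo(1.5, 1, 1.5, 3, 0, 4);   // right edge up to the tip
petalShape.bezierCurveTo(-1.5, 3, -1.5, 1, 0, 0); // left edge back down

const petalGeometry = new THREE.ExtrudeGeometry(petalShape, {
  depth: 0.1,        // keep it thin and petal-like
  bevelEnabled: true,
  bevelThickness: 0.05,
  bevelSize: 0.05
});

const petalMaterial = new THREE.MeshLambertMaterial({ color: 0xffb7c5 });
const petal = new THREE.Mesh(petalGeometry, petalMaterial);
scene.add(petal); // assumes an existing `scene` with a light
```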

After playing with these new 3D petals I eventually decided to scrap everything I had done prior, as I felt the obvious contrast between the two types of petals would be jarring and would actually hurt the immersiveness. I kept the same format of using several groups and systems, but this time with better petals. I also added grass to my sketch by dissecting one of the three.js example sketches; the grass was made to move with sine waves.

I wanted the project to have several different phases and patterns, and I struggled immensely with creating, if you will, beautiful chaos. I eventually turned toward the autonomous agents that we had learned about in class. My main concern, however, was that everything in class was not only taught in p5, but was also object oriented. I had never done object-oriented three.js, and I was a bit clueless as to how to implement the third dimension. I considered looking into 3D Perlin noise and 3D vector fields, but due to my inexperience and lack of skill I wasn't really able to do much with the information I found on those topics.

I eventually stumbled across the three.js boids sketch, which exhibits the autonomous agents we'd learned about in class, except in 3D. I began to dissect it to try to make use of it, but realized that it worked in a slightly different way than I was used to seeing. In reality the concept was still the same; I just wasn't familiar enough with it to understand it presented differently. I mistakenly thought that I needed to do object-oriented programming in order to make use of their code, so I created a new sketch and tried to make single object-oriented particles. However, the THREE.Points method doesn't really lend itself to object-oriented creation of single points, and I struggled immensely with accessing my points.

I realized, however, that I could substitute my pseudo-3D models in for the birds of the boids sketch. I was successfully able to modify and use that code to create 3D attraction, repulsion, and flocking. With that out of the way, I wanted to add interaction to my project. Since I was still working on the Kinect-to-websocket setup, I ended up doing a proof of concept first, using attraction to the mouse point. I realized that, unlike my previous projects, where I projected my 2D position into the 3D world with raycasting, here I had to unproject and do the opposite. This required several different functions and methods that I had never really heard of, so I actually just searched Stack Overflow for a while until I found people who had done similar things. I was able to adapt some of their solutions into my own sketch, creating an invisible sphere mapped to my mouse position, which would then attract any flower petals within a certain vicinity.
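Roughly how that mouse-to-3D mapping can work (an adapted sketch of the idea rather than my exact code): the mouse is converted to normalized device coordinates, a Raycaster finds where it lands on an invisible plane, and the invisible attractor sphere is moved to that point.

```javascript
// Assumes an existing `scene` and `camera`.
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

const attractor = new THREE.Mesh(
  new THREE.SphereGeometry(10, 16, 16),
  new THREE.MeshBasicMaterial({ visible: false }) // invisible attraction sphere
);
scene.add(attractor);

const ground = new THREE.Mesh(
  new THREE.PlaneGeometry(500, 500),
  new THREE.MeshBasicMaterial({ visible: false }) // invisible hit surface
);
scene.add(ground);

window.addEventListener('mousemove', (event) => {
  // pixel coordinates -> normalized device coordinates (-1 to 1)
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

  raycaster.setFromCamera(mouse, camera);
  const hits = raycaster.intersectObject(ground);
  if (hits.length > 0) {
    attractor.position.copy(hits[0].point); // petals near this point are attracted
  }
});
```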

At this point in the project it was actually already around Tuesday or Wednesday, so technically past the initial due date, but thanks to Moon's graciousness I had been scheduled to present on Thursday and had a few more days. I stayed overnight at the academic building from Tuesday to Thursday, without going home, working on this assignment. The last night was rather unproductive, as it was spent trying to learn the Windows command line and reading module errors. I was a novice at Node, and between the different SDKs and Kinects I spent many hours of that last night just downloading Windows 8 and various programs; the whole process of downloading software cost me 9 hours of my last day. Even after those 9 hours, for whatever reason, the Kinect was not recognized by the desktop I had borrowed from a friend, and I had no choice but to ask another close friend to trade computers with me for the coming few days. He graciously agreed, and I was able to download the node kinect2 library. After some playing around with node.js examples and using some of my prior knowledge, I managed to get the skeletal tracking to work over websockets.

However, I drastically underestimated how long it would take to calibrate, re-unproject points, and map the Kinect world into the 3D world of my existing project. While I managed to get the petals to be attracted to both hands and the user's head, it was often not very sensitive and was very spatially dependent. For the actual show, because I didn't have access to the presentation room until two hours prior and I didn't know how the other projects would be set up, I was unable to really utilize the Kinect the way I wanted to. No matter how I positioned the Kinect, the points of attraction simply would not be in a friendly place; users would have to bend down quite low and hold their hands quite high for the Kinect to sense them, and the positions would map very differently in my sketch instead of matching the exact physical location as I had hoped. So that the piece would still show beautiful visuals for a large portion of the show, I manually forced the sketch to stay in the auto-flocking pattern, which generated more beautiful visuals at the cost of interactivity.

Building Process:

I can't lie: Prof. Moon and his classes have had the most pronounced impact on me and on the direction I've decided to take my academic career. His mini intro to three.js in Kinetic Interfaces led to my obsession with it, and the installations we built in KI also influenced me to think bigger and be more ambitious with physical computing. With the help of Jiwon (God bless her beautiful soul) I was able to use the wood from the woodshed and build a small corridor room for users to walk into, employing the genius technique that I had witnessed and that Moon recommended to me from Void at ITP: instead of using an opaque white curtain, we used 8-9 extremely thin layers of tulle. I designed the room to have "wings" sticking out, which allowed for several walls. This simple adjustment took the visuals to another level and brought out a surreal 3D depth in the images.

Because of the lack of time and of a dedicated space to build, I wasn't able to engineer a system that systematically pumped the tulle layers with perfume. Instead, I made a mad dash to the mall at Century Ave on the day of the show and bought some earthy forest mist from Innisfree. I would have gone with a regular perfume, but they didn't carry any at the time. This turned into a huge mistake: even though I emptied over half the bottle onto the walls and the wood frames, the smell still wasn't that impactful.

 

Pattern 2:

Pattern 1:

 

Pattern 3:

Frame:


Final Show:

Unbuilt frame:
