Week 13: Greene Response – Szetela

Rachel Greene's Internet Art shows how artists have employed online technologies (websites in general) to create new forms of art and to move into fields normally beyond what one would deem the "artistic realm."

I think what is really interesting about the advent of the Internet is that Internet art feels quite different from artwork created in the past, because it can be created and experienced by so many people. Many people who are not "artists," including me, have used the Internet and technology to create and explore art. The Internet has turned many people who would never have considered themselves artists into creators. It has also provided plenty of opportunity to view others' artwork, to be influenced by others, and to continually expand the domain of knowledge that goes into creating art. Technology in general has also allowed art to take on many new forms, whether it's interactive, video, audio, or even collaborative in real time (Reddit's r/place pixel art).

Week 11-12: Graham and Response – Szetela

Rand Response, “Computers, Pencils, and Brushes”

I don't agree with Rand's response (which opposes Graham's) that the computer is merely a tool that cannot be used to create true art, or that it forms a barrier between the artist and what the artist wants to achieve. I believe the computer is nothing more than a very advanced pencil or brush. Rand is correct that with the advent of the computer it is "easier" to create ideas, concepts, blueprints, and designs, but it is nothing more than that. Behind the computer there still has to be someone with the experience to create beautiful work (art) that others will appreciate. The computer is a way to push art forward into new territory, connecting the physical and digital worlds. I don't think art is as strictly defined as Rand believes it to be.

Response to Graham’s “Hackers and Painters”

I found Paul Graham's article an interesting read. He believes that hackers are similar to painters and considers hackers to be "makers" rather than pure scientists. He does not believe "computer science" is the appropriate term; rather, he considers hackers to sit somewhere between an architect and an engineer. He writes that the end goal for all art (creative art in general) is to "make things for a human audience" and, furthermore, "to engage [that] audience." Graham writes that "nearly all the greatest paintings are paintings of people, for example, because people are what people are interested in." I both agree and disagree with his statements. Although "hacking," or creating software, mainly uses the computer as a medium for creation and design, a lot of the software created has a deeply rooted mathematical and scientific basis behind it. I don't believe painters and artists approach the creation of their art the same way software engineers approach theirs. That is not to say that software engineering cannot be interpreted as an art form, but I don't believe it is correct to say that engineers are not scientists.

NOC – Week 10 Assignment – Kevin Li

Instead of autonomous agents, I decided to code cellular automata (which is a related topic). I made a Game of Life sketch in p5.js.

Screen Shot 2017-05-21 at 1.44.06 PM

The rules of the game are as follows.

The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive or dead, or “populated” or “unpopulated”. Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

  1. Any live cell with fewer than two live neighbours dies, as if caused by underpopulation.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overpopulation.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

This was relatively easy to do in p5.js (actually less than 20 lines).

I based my sketch on example code from https://p5js.org/examples/simulate-game-of-life.html.
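For reference, a simplified p5.js version of the idea (not my exact code; the cell size and colors are arbitrary, and this grid wraps at the edges instead of being infinite) looks something like this:

```javascript
// Conway's Game of Life in p5.js (simplified sketch)
let grid, cols, rows, cell = 10;

function setup() {
  createCanvas(600, 400);
  cols = width / cell;
  rows = height / cell;
  // start from a random field of live (1) and dead (0) cells
  grid = Array.from({ length: cols }, () =>
    Array.from({ length: rows }, () => floor(random(2))));
}

function draw() {
  background(255);
  let next = Array.from({ length: cols }, () => new Array(rows).fill(0));
  for (let x = 0; x < cols; x++) {
    for (let y = 0; y < rows; y++) {
      // count the eight neighbours, wrapping around the edges
      let n = 0;
      for (let i = -1; i <= 1; i++)
        for (let j = -1; j <= 1; j++)
          n += grid[(x + i + cols) % cols][(y + j + rows) % rows];
      n -= grid[x][y];
      // apply the four rules listed above
      if (grid[x][y] === 1) next[x][y] = (n === 2 || n === 3) ? 1 : 0;
      else                  next[x][y] = (n === 3) ? 1 : 0;
      if (grid[x][y] === 1) { fill(0); rect(x * cell, y * cell, cell, cell); }
    }
  }
  grid = next;
}
```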

NOC – Week 9 Assignment – Kevin Li

I have done a lot of particle systems as assignments in this class (my midterm, particle engine, etc.) so I will try something simpler for this assignment.

Instead of the many shapes that we generated in class, I made a simple snow-falling sketch with a particle system.

Screen Shot 2017-05-21 at 12.35.52 PM

I actually played around with depth and z-indexing (layers) to get a more realistic 3-D effect.

Screen Shot 2017-05-21 at 1.54.23 PM
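A stripped-down version of the idea (not my exact sketch; the counts, speeds, and colors below are illustrative) uses each flake's depth value to scale its size, speed, and opacity:

```javascript
// A simple falling-snow particle system in p5.js with a fake depth (z) value.
let flakes = [];

function setup() {
  createCanvas(800, 600);
  noStroke();
  for (let i = 0; i < 300; i++) flakes.push(newFlake(random(height)));
}

function newFlake(y) {
  // z in (0.2, 1]: closer to 1 means nearer to the viewer (bigger, faster, brighter)
  return { x: random(width), y: y, z: random(0.2, 1), drift: random(-0.3, 0.3) };
}

function draw() {
  background(20, 30, 60);
  for (let f of flakes) {
    f.y += 1 + 3 * f.z;          // nearer flakes fall faster
    f.x += f.drift * f.z;        // slight horizontal drift
    fill(255, 150 + 105 * f.z);  // nearer flakes are more opaque
    ellipse(f.x, f.y, 2 + 6 * f.z);
    if (f.y > height + 10) Object.assign(f, newFlake(-10)); // recycle at the top
  }
}
```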


NOC – Final Project Documentation – Kevin Li

Project Title
Metempsychosis
Project Video

Project Images
Description of Development of the Project
This project was completed in approximately three weeks' time. I began the project wanting to learn more about GLSL after seeing the incredible demos on ShaderToy, as well as WebGL demos on Chrome Experiments that use shaders. GLSL is a programming language that executes directly on your GPU. These programs are called "shaders": tiny programs that describe how to draw things. The big conceptual shift when working with shaders is that they run in parallel. Instead of looping sequentially through each pixel one by one, a shader is applied to every pixel simultaneously, taking advantage of the parallel architecture of the GPU.
This is very powerful and is the basis of a concept called GPGPU. GPGPU essentially exploits the parallel nature of the graphics card to do more than simply render to the screen, namely, to do computation that you would normally do on the CPU. While a GPU normally uses its vertex and pixel processors to shade a mesh, GPGPU uses those processors to do calculations and simulations. We can perform GPGPU with WebGL via a process called render-to-texture (RTT). In essence, the output of a shader can be a texture, and that texture can be the input for another shader. If we store two or more of these textures (bitmaps) in GPU memory as frame buffer objects (FBOs for short), we can read from one and write to another without breaking the parallel nature of the GPU. We also exploit the fact that while a frame buffer typically holds RGB(A) color values, we can instead use it to store XYZ values. This lets us simulate and store particle position data in fragment shaders, as textures on the GPU, rather than as objects on the CPU side, and the parallelization allows for massive speed-ups.
I spent a few days experimenting with GLSL: picking up basic shader language and syntax, understanding the difference between vertex and fragment shaders, clip coordinates, coordinate systems, the model/view/projection matrices, and getting a basic grasp of matrix translation, rotation, and scaling. I wanted to understand just enough to be able to write shader syntax and implement the FBO simulations.
I rewrote and implemented parts of @nicoptere's (https://github.com/nicoptere/FBO), @cabbibo's (https://github.com/cabbibo/PhysicsRenderer), and @zz85's (https://threejs.org/examples/webgl_gpgpu_birds.html) FBO / GPGPU simulation, vertex, and fragment shaders, as well as the "ping-pong" texture-flipping technique, to get this result.
Screen Shot 2017-05-20 at 5.07.42 PM
Screen Shot 2017-05-20 at 5.07.47 PM
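The ping-pong part of that technique boils down to two floating-point render targets that are swapped every frame. A rough Three.js sketch of the idea follows; the texture size, uniform name, and the trivial pass-through shader are placeholders, since my actual simulation shader applies the forces described below.

```javascript
// Minimal ping-pong FBO sketch with Three.js (assumes a recent three.js build).
import * as THREE from 'three';

const SIZE = 256; // a SIZE x SIZE float texture stores SIZE*SIZE particle positions

function makeTarget() {
  return new THREE.WebGLRenderTarget(SIZE, SIZE, {
    type: THREE.FloatType,          // full-precision floats, not 8-bit colors
    minFilter: THREE.NearestFilter, // no interpolation between texels (particles)
    magFilter: THREE.NearestFilter,
  });
}

let read = makeTarget();  // positions from the previous frame
let write = makeTarget(); // positions for the current frame

const simulationMaterial = new THREE.ShaderMaterial({
  uniforms: { positions: { value: null } },
  vertexShader: `
    varying vec2 vUv;
    void main() { vUv = uv; gl_Position = vec4(position, 1.0); }`,
  fragmentShader: `
    uniform sampler2D positions;
    varying vec2 vUv;
    void main() {
      vec3 pos = texture2D(positions, vUv).xyz; // previous XYZ stored as RGB
      // ...forces would be applied to pos here...
      gl_FragColor = vec4(pos, 1.0);
    }`,
});

// full-screen quad that runs the simulation shader over every texel
const simScene = new THREE.Scene();
const simCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
simScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), simulationMaterial));

function simulate(renderer) {
  simulationMaterial.uniforms.positions.value = read.texture; // read last frame
  renderer.setRenderTarget(write);                            // write this frame
  renderer.render(simScene, simCamera);
  renderer.setRenderTarget(null);
  [read, write] = [write, read];                              // ping-pong swap
  // the particles' render material then samples read.texture in its vertex shader
}
```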
I added simplex noise and curl noise from Ashima Arts (https://github.com/ashima/webgl-noise).
Screen Shot 2017-05-20 at 5.07.54 PM
I then modeled a few simple shapes (plane, sphere, rose) with mathematical functions (parametric surfaces), as sketched below, and also imported 3D .obj models from Three Dreams of Black (an interactive WebGL video – http://www.ro.me/tech/) to place particles on. Particles were placed on the vertices of the 3D mesh.
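To illustrate what placing particles on a parametric surface means, here is a rough JavaScript sketch of filling position data with points on a sphere; the rose and plane work the same way with different formulas, and the exact buffer layout in my project differs.

```javascript
// Fill a Float32Array (later uploaded as a data texture) with points on a sphere.
// u and v are the two surface parameters; any parametric surface can be swapped in.
function spherePositions(count, radius) {
  const data = new Float32Array(count * 4); // RGBA texels, XYZ in RGB, W unused
  for (let i = 0; i < count; i++) {
    const u = Math.random() * Math.PI * 2;      // longitude
    const v = Math.acos(2 * Math.random() - 1); // latitude, area-uniform
    data[i * 4 + 0] = radius * Math.sin(v) * Math.cos(u); // x
    data[i * 4 + 1] = radius * Math.sin(v) * Math.sin(u); // y
    data[i * 4 + 2] = radius * Math.cos(v);                // z
    data[i * 4 + 3] = 1;
  }
  return data;
}
```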
The physics of the project is straightforward: a continual gravitational attraction towards the defining shape (whether a parametric shape or a mesh) and a repulsive force when the particles are disturbed at a particular point (a mouse click). The forces are further controlled by a decay applied to velocity and by the strength of attraction or repulsion. Curl noise can also be added to the velocity.
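In the project these forces are computed per particle inside the simulation fragment shader, but the logic reads roughly like the following CPU-side JavaScript sketch (the constants and field names are illustrative, not my actual values):

```javascript
// Per-particle force update: attraction toward the particle's "home" position on
// the shape, repulsion from the mouse when clicked, and velocity decay.
const ATTRACTION = 2.0, REPULSION = 0.5, DECAY = 0.95; // illustrative constants

function updateParticle(p, mouse, dt) {
  // constant pull back toward the particle's home position on the shape/mesh
  let ax = (p.home.x - p.pos.x) * ATTRACTION;
  let ay = (p.home.y - p.pos.y) * ATTRACTION;
  let az = (p.home.z - p.pos.z) * ATTRACTION;

  // repulsion away from the disturbance point while the mouse is pressed
  if (mouse.pressed) {
    const dx = p.pos.x - mouse.x, dy = p.pos.y - mouse.y, dz = p.pos.z - mouse.z;
    const d2 = dx * dx + dy * dy + dz * dz + 0.0001; // avoid division by zero
    ax += (dx / d2) * REPULSION;
    ay += (dy / d2) * REPULSION;
    az += (dz / d2) * REPULSION;
  }

  // integrate, then decay the velocity so disturbances die out over time
  p.vel.x = (p.vel.x + ax * dt) * DECAY;
  p.vel.y = (p.vel.y + ay * dt) * DECAY;
  p.vel.z = (p.vel.z + az * dt) * DECAY;
  p.pos.x += p.vel.x * dt;
  p.pos.y += p.vel.y * dt;
  p.pos.z += p.vel.z * dt;
}
```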
Resources
http://barradeau.com/blog/?p=621
http://www.lab4games.net/zz85/blog/2013/12/30/webgl-gpgpu-and-flocking-part-1/
http://www.lab4games.net/zz85/blog/2014/02/17/webgl-gpgpu-flocking-birds-part-ii-shaders/
http://www.lab4games.net/zz85/blog/2014/04/28/webgl-gpgpu-flocking-birds-the-3rd-movement/
https://github.com/nicoptere/FBO
http://www.hackbarth-gfx.com/2013/03/17/making-of-1-million-particles/
https://www.youtube.com/watch?v=HtF2qWKM_go

Szetela – Ex-Memory – Video Project Documentation

1) Project Title
Memory
2) Project image/screenshot
[Project screenshots]
3) Working link (please wait a minute to load, videos are not compressed yet)
 http://192.168.50.184/~kl2482/commlab/memory/
4) Group members and their roles/responsibilities in the development of the project
Sarah, Chloe, and I all contributed a lot of time and effort to developing the storyline and ideation of the project. This took most of our time in the beginning and intermediate stages. We went through many different iterations and adaptations to get to our final concept, and we asked a lot of students and others about our ideas, including Professor Szetela, Professor Moon, and IMA fellow Jiwon. This project was a team effort, and I believe it was very successful because of our relentless refusal to give up on finding the right storyline. All three of us were into the "futuristic theme" and had watched almost all of the Black Mirror episodes, and I think we bonded a lot over this. We met often to discuss the storyline, logic, camera angles, lighting, ambience, environment, technical challenges and pitfalls, video, audio, and interaction.
5) Description of the project idea
The project takes place in the year 3001, a world in which memories are very sought after. The user experiences our project as a memory extractor and gets to see all the parts that go into extracting a memory from our subject.
6) Description of how the project navigation and user interaction works
The user is presented with a landing page that describes the project, along with a full-screen video of a subject entering a room with a chair. Immediately, the user is launched into an immersive experience. The room is dark except for the chair in the middle, and it is very clear that the environment is not an ordinary one. The user clicks begin and is taken to a futuristic GUI. This GUI is what you see as a "memory collection operator." You can click through the camera angles to see different shots of the subject, Jacob, as he waits for his turn. There is a heartbeat monitor as well as an information panel, all adding to the immersive feel. In the center is an extract-memory button, where the main extraction process begins. The extraction video (about a minute long) plays; when it finishes, the user sees the memory being taken and placed into a wall of other similar memories that have been collected from the subject. The video is transformed into a photo and placed into a 3D landscape of memories.
7) Description of development of the project (how did you make it)
The website is written in Javascript with the help of a few libraries for preloading audio and video assets, transitions, and tweening. The futuristic GUI was designed in Adobe Illustrator and exported as an SVG, with certain elements given an ID (label) in Illustrator so they could be tagged and selected in Javascript. We used an SVG manipulation and animation library called Snap.svg to speed up the process of playing around with these SVG elements. Some GUI elements were personally designed; others were taken from a "Futuristic Interface Builder Pack" that I bought from Envato Market. The main video was shot with a projection mapped onto the body (with the help of a Microsoft Kinect and Processing): the projected animations were custom built in Processing and projected onto Jacob's body. Unfortunately, we did not have time to use MadMapper for a better mapping of the projection; instead, we manually aligned the projector and projection to fit our needs. We filmed all of our video in the dance room, where the room was large and the black background was far away, so that any projection light hitting it would be much dimmer. This allowed us to use a Levels correction to darken the background in post-processing. We used Adobe After Effects and Premiere for video editing. We did not do much post-processing in terms of effects; we simply cut and sequenced the clips together, adding a slight warp effect to one of the projection animations. After the video was sequenced, we added audio tracks at certain key points. The memory extracted was an audio recording of Jacob and Sarah. We layered many different audio tracks (~10) to get the final video. After the video plays, it fades into an interactive 3D wall of memories; one memory is added to the wall, which is the actual photo of the audio memory played during the video. This was done in Three.js with the CSS3DRenderer.
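As an example of the SVG side of this, selecting an Illustrator-exported element by its ID and animating it with Snap.svg looks roughly like the sketch below; the element IDs and helper function are made up for illustration, not our actual code.

```javascript
// Hypothetical sketch of driving an Illustrator-exported SVG GUI with Snap.svg.
function playExtractionVideo() {
  document.querySelector("#extraction-video").play(); // a <video> element on the page
}

var gui = Snap("#gui-svg");                     // the inline <svg> exported from Illustrator
var extractBtn = gui.select("#extract-button"); // an element given this ID in Illustrator
var heartbeat = gui.select("#heartbeat-line");

extractBtn.click(function () {
  extractBtn.animate({ opacity: 0.3 }, 400); // dim the button while extracting
  playExtractionVideo();
});

// loop a simple pulse on the heartbeat trace by scaling it up and back down
function pulse() {
  heartbeat.animate({ transform: "s1,1.3" }, 300, mina.easeinout, function () {
    heartbeat.animate({ transform: "s1,1" }, 300, mina.easeinout, pulse);
  });
}
pulse();
```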
8) What you learned from the process
We learned a lot from this entire process. Almost everything was new to us: we had never used projection mapping before, nor had any of us used Premiere or After Effects. Although we were stuck in the beginning in terms of ideation and whether the story made sense, we trusted our gut and did not compromise on the integrity of our story, even though we were running out of time.

Szetela – Internet Art Documentation

Project Title: Internet Art
Project image/screenshot:
[Project screenshots]
Working link: https://frozen-fjord-42141.herokuapp.com/canvas.html
Group members and their roles/responsibilities in the development of the project:
Both Sarah and I worked on the conceptual development and ideation of the project. We worked together on the coding, although she was more familiar with the frontend part of development than with the server side. Sarah had really great ideas during the project's conception. As she also has a background in painting and drawing, her experience and insight into watercolor were integral to the development of the project.
Description of the project idea
The project idea is simple: it is a watercolor canvas that can be painted with the mouse or with a mobile phone as the "brush." We set out to simulate the effects of watercolor, the water seeping into the paper and dripping down it. The different tones of watercolor within a single paint stroke, as well as the pressure of the stroke, were also things we tried to focus on.
Description of how the project navigation and user interaction works
The project navigation is also simple. You can draw paint strokes on the canvas with your mouse; the speed at which you move the mouse determines the pressure of the stroke. You can also draw paint strokes on the canvas with a force-touch-enabled phone. Unfortunately, we were unable to complete this portion, so all you can do from the phone is "drip" strokes at a position on the canvas. Once the paint is on the canvas, you can tilt the phone to move the paint.
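A rough sketch of how the tilt interaction could be wired up with socket.io is below; the event names, scaling factors, and variable names are made up for illustration.

```javascript
// --- On the phone page: read tilt and stream it to the server via socket.io ---
var phoneSocket = io();
window.addEventListener("deviceorientation", function (e) {
  // beta: front-back tilt in degrees, gamma: left-right tilt in degrees
  phoneSocket.emit("tilt", { beta: e.beta, gamma: e.gamma });
});

// --- On the canvas page: the server relays "tilt" and we nudge the wet paint ---
var canvasSocket = io();
var paintOffsetX = 0, paintOffsetY = 0; // read by the p5.js draw loop
canvasSocket.on("tilt", function (t) {
  paintOffsetX += t.gamma * 0.01;
  paintOffsetY += t.beta * 0.01;
});
```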
Description of development of the project (how did you make it)
The development of the project began with Sarah and me brainstorming internet art ideas. We settled on watercolor because we thought it would be a technically challenging and visually beautiful project. The simulation of water is a very challenging topic in computer graphics, and we did not have the knowledge or background to accomplish a true simulation. Instead, we decided to "fake" the aesthetics and visual effects of the paint to simulate the application of watercolor to paper.
We did this by making the amount of "ink" left in each paint stroke a function of time: as time passes, each paint stroke has decreasing opacity and weight applied to it. Each paint stroke was then layered with 8-10 other strokes that were either lighter or darker in color on the same monochromatic palette. These strokes were simply lines in p5.js with strokeWeight() and a color/alpha applied to them. The lines were very short (from the previous to the current mouse position), which is how we achieved the gradient and dynamic feel of each paint stroke (each of which consists of many small lines). The layered paint strokes were blended additively, and a small degree of randomness and noise was applied to each stroke so that the strokes (and their blending) would not all sit in one position on top of each other. This really added to the feel of watercolor.
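A stripped-down p5.js sketch of that layering idea (not our actual code; the layer count, fade rate, jitter, and palette are illustrative) looks something like this:

```javascript
// Layered, fading "watercolor" strokes in p5.js: each mouse drag adds short line
// segments, and each segment is drawn several times with jittered position,
// varying tone, and an alpha/weight that decay over time (the "ink" running out).
let segments = []; // { x1, y1, x2, y2, birth }

function setup() {
  createCanvas(800, 600);
}

function mouseDragged() {
  segments.push({ x1: pmouseX, y1: pmouseY, x2: mouseX, y2: mouseY, birth: frameCount });
}

function draw() {
  blendMode(BLEND);
  background(0);
  blendMode(ADD); // layered strokes add together, like pooling pigment
  for (let s of segments) {
    const age = frameCount - s.birth;
    const ink = max(0, 1 - age / 200);         // how much "ink" is left in this stroke
    for (let i = 0; i < 8; i++) {              // 8 layered copies per segment
      const jx = random(-2, 2), jy = random(-2, 2);
      stroke(60 + i * 15, 100, 200, 40 * ink); // lighter/darker tones, fading alpha
      strokeWeight((8 - i) * ink + 0.5);       // thinner as the ink runs out
      line(s.x1 + jx, s.y1 + jy, s.x2 + jx, s.y2 + jy);
    }
  }
  segments = segments.filter(s => frameCount - s.birth < 200); // drop dry strokes
}
```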
What you learned from the process
We began the project trying to challenge ourselves technically, and I believe we accomplished most of our goals. However, in the beginning we focused too much on the use of "cool technology" (the pressure sensor and accelerometer, sockets, and real-time connections with the iPhone) because we thought these sensors would give us better control of the brush, added "randomness," and a better user experience. In fact, it was the very opposite: the mouse turned out to be a much better and easier tool to use (although we only began implementing the mouse functions the day before the project was due). We also found that server-side socket connections were not fast enough for real-time drawing with the iPhone as a controller unless the server was locally hosted (or was really fast).

Week 6: Response to Remix & Foley Videos (Szetela)

Foley: This video was very interesting and eye-opening to me! I did not know that the term "Foley" was used in the industry to describe this kind of audio post-production for movies and film. I have always been interested in audio, and it really resonated with me when the mixer in the video described how "satisfying" it is to find the right sound for the right scene. After watching this video, though, I also can't help but feel a little cheated. I understand it is very hard to record natural sound when shooting a movie, because there are a lot of scenes without dialogue that produce sounds too quiet or not exaggerated enough for us to tell apart. This brings up another interesting point: humans may be able to differentiate between sounds and pick up on nuances and subtle differences, which are then amplified in our heads. This "amplification" is partly why we need Foley sounds: to bring out the uniqueness of each sound, like a sonic footprint that we identify in our heads but not necessarily when we are just listening with our ears.

Remix: I found this TED talk interesting; the speaker tackles a topic that I think a lot of people have on their minds but do not say out loud. I've actually read a book titled "Steal Like an Artist," in which the author describes a few key things about artists and the creation of new art: nothing is original, and every new idea is a remix of previous ideas. He also says something I found very inspiring: "Don't wait until you know who you are to start making things." I understand this as: instead of sitting around trying to figure out how to be original, how to be creative, or how to take 100% ownership of your own work, just start being creative, whether it's by faking it, copying, remixing, or collecting ideas. You will learn things from these ideas, and they will influence you as an artist, turning into something new (remixed) that belongs to you.

Week 9: Response to “The Work of Art in the Age of Mechanical Reproduction” by Walter Benjamin (Szetela)

Walter Benjamin argues that technology is changing art. He seems to say that mechanical reproduction is not equivalent to art in the traditional sense because reproductions lack a unique existence and history. A mechanical reproduction has no particular history and therefore, as art, it is not unique. He writes that "the uniqueness of a work of art is inseparable from its being imbedded in the fabric of tradition." Mechanical reproductions do not carry the ritual and history of an object. Reproduction also allows a work to be viewed by mass audiences, where individual reactions are constrained and shaped by the mass audience.

I am not sure I agree with his stance. I believe that art is always evolving, ever-changing in form and function. We are entering a period in which mechanical reproduction is the medium through which art is produced, and in which the purpose of art is less pure expression than the "politicization" or reflection of its environment; in that sense, reproduced art still fully serves the purpose of capturing the time, place, and "aura" of the work.