Kinetic Interfaces – Intel RealSense for Web w/ Node.JS, Midterm (Kevin Li)

Realsense for Web and p5.js

 

Motivations

I wanted to explore depth-sensing cameras and technology. I knew that regular cameras output RGB images, and that we can use OpenCV or machine learning to process those images and learn basic features of a scene. For example, blob or contour detection allows us to do hand or body tracking. We can also do color tracking, optical flow, and background subtraction in OpenCV, and we can apply machine learning models for face detection, tracking, and recognition. Recently, pose or skeletal estimation has also become possible (PoseNet, OpenPose).

However, even with OpenCV and ML, recovering dimensions (depth) from a flat image is a very difficult problem. This is where depth sensors come in.

 

Pre-Work Resources

I gathered a variety of sources and material to read before I began my experiments. The links below constitute some of my research; they are the readings I found most interesting and relevant.

Articles on pose estimation or detection using ML:

https://medium.com/@samim/human-pose-detection-51268e95ddc2

https://github.com/CMU-Perceptual-Computing-Lab/openpose

https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5

On Kinect:

http://pages.cs.wisc.edu/~ahmad/kinect.pdf (great in-depth on how Kinect works)

On Depth Sensing vs Machine Learning:

https://blog.cometlabs.io/depth-sensors-are-the-key-to-unlocking-next-level-computer-vision-applications-3499533d3246 (great article!)

On Stereo Vision Depth Sensing:

Building and calibrating a stereo camera with OpenCV (<50€)

https://github.com/IntelRealSense/librealsense/blob/master/doc/depth-from-stereo.md

On Using Intel Realsense API and more CV resources:

https://github.com/IntelRealSense/librealsense/wiki/API-How-To

https://github.com/jbhuang0604/awesome-computer-vision

 

Research Summary

I learned that while cutting-edge machine learning models can provide near real-time pose estimation, each model is typically trained to do only the one thing it is good at, such as detecting a pose. Another big problem is energy consumption and the requirement of fast graphics processing units.

Quality depth information, however, is much more raw in nature, and can make background removal, blob detection, point cloud visualization, scene measurement and reconstruction, as well as many other tasks, easier and more fun.

Depth sensors do this by adding a new channel of information, depth (D), for every pixel; together, these values make up a depth map.

There are many different types of depth sensors, such as structured light, infrared stereo vision, and time-of-flight, and this article gives a really well-written overview of all of them:

https://blog.cometlabs.io/depth-sensors-are-the-key-to-unlocking-next-level-computer-vision-applications-3499533d3246

All in all, each has specific advantages and drawbacks. Through this article, I knew the Kinect used structured light (http://pages.cs.wisc.edu/~ahmad/kinect.pdf), and I generally knew how the Kinect worked, as well as its depth quality, from previous experiments with it. I wanted to explore a new, much smaller depth sensor (it runs on USB) that uses a method known as infrared stereo vision (inspired by the human visual system) to derive a depth map. It relies on two cameras and calculates depth by estimating disparities between matching keypoints in the left and right images.

I knew the RealSense library had an open-source SDK (https://github.com/IntelRealSense/librealsense); however, it is written in C++, which means it is not the easiest to get started with, to compile, or to document. Recently, though, they released a Node.js wrapper, which I hoped would make things easier for me. One of my goals is to figure out how to use the library, but also to see if I could make it easier to get started with through a more familiar drawing library that we already know and use.

 

Process

Hour 1 – 4: Getting Intel RealSense Set Up, Downloading and Installing RealSense Viewer, Installing Node-LibRealSense library bindings and C++ LibRealSense SDK, Playing Around With Different Configuration Settings in Viewer

Hour 5: Opening and running the Node example code. I see a somewhat complicated example of sending RealSense data through WebSockets; it seems promising, and I want to try to build my own.

Hour 6: Looking at different frontend libraries or frameworks (React, Electron) before deciding to just plunge in and write some code.

https://gist.github.com/polarizing/4463aacc88c58a9878cb180ad838c777

I'm able to open a context, look through the available devices and sensors, and get a specific sensor by name, either "Stereo Module" or "RGB Camera". Then I can get all the stream profiles for that sensor (there are a lot of profiles, varying by fps, resolution, and type: infrared or depth), but the most basic one I want is a depth stream at 1280×720 resolution and 30fps.

I can open a connection to the sensor with .open(), which opens the subdevice for exclusive access.
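Condensed, that flow looks something like the sketch below. I'm assuming the node-librealsense accessor names here (see the gist above for the exact calls), so treat it as a sketch rather than copy-paste code:

// Sketch: enumerate devices/sensors and open a 1280x720 @ 30fps depth stream.
// Accessor and property names are assumptions based on the node-librealsense wrapper.
const rs = require('node-librealsense');

const ctx = new rs.Context();
const device = ctx.queryDevices().devices[0]; // first connected RealSense
const sensors = device.querySensors();        // e.g. "Stereo Module", "RGB Camera"
const stereo = sensors[0];                    // the "Stereo Module" on my device

// Each sensor exposes many profiles (fps x resolution x stream type);
// pick the basic one I want: depth, 1280x720, 30fps.
const depthProfile = stereo.getStreamProfiles().find(p =>
  p.streamType === rs.stream.STREAM_DEPTH &&
  p.width === 1280 && p.height === 720 && p.fps === 30);

stereo.open(depthProfile); // opens the subdevice for exclusive access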

Hour 7: Lots of progress! I can start capturing frames by calling .start() and providing a callback function. A DepthFrame object is passed to this callback every frame, consisting of the depth data, a timestamp, and a frame count. I can then use the Colorizer class that comes with the Node RealSense library to visualize the depth data by transforming it into RGB8 format. This has a problem, though: the depth image is 1280 × 720 = 921,600 pixels, and stored as RGB8 that becomes 921,600 × 3 = 2,764,800 bytes, or about 2.76 MB per frame. At 30 frames per second, that is nearly 83 MB of data per second! Probably way too much for streaming anything between applications.

We can compress this using a fast image compression library called Sharp, with quite good results. Setting the image quality to 10, we get 23 KB per frame, or 690 KB/s. Quality 25 gets us 49 KB per frame, or about 1.5 MB/s (which is quite reasonable). Even at quality 50, which is 76 KB per frame, we average 2.2 MB/s. From this, I estimate it is quite reasonable to stream the depth data between local applications, with potential to stream over the Internet as well. I might try that next.
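Continuing the sketch above, the capture-and-compress step looks roughly like this. The Sharp calls are the standard Sharp API; the frame's .data accessor is my assumption about the wrapper:

const sharp = require('sharp');
const colorizer = new rs.Colorizer(); // converts a depth frame into an RGB8 image

stereo.start(depthFrame => {
  // Called once per frame with the depth data, a timestamp, and a frame count.
  const colorized = colorizer.colorize(depthFrame);

  // Raw RGB8 is 1280 * 720 * 3 = ~2.76 MB per frame (~83 MB/s at 30fps),
  // so JPEG-compress each frame before sending it anywhere.
  sharp(Buffer.from(colorized.data), { raw: { width: 1280, height: 720, channels: 3 } })
    .jpeg({ quality: 25 }) // ~49 KB/frame, ~1.5 MB/s at 30fps
    .toBuffer()
    .then(jpeg => broadcast(jpeg)); // hand off to the websocket layer (below)
});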

https://gist.github.com/polarizing/12aa720526c91596883da541e584e018

Current Backend Diagram

[Insert Image]

Hour 8 – 9: More progress. I got stuck in a few sticky spots but ended up getting through them, and now we have a quick-and-dirty working implementation of RealSense over WebSockets, using the ws WebSocket library for Node.

Challenges Here + Blob

Hour 8 (backend w/ websockets): https://gist.github.com/polarizing/be9873a9a07d5df2155e7df436dc282d
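The broadcast side of that gist boils down to very little code with the ws library (a minimal sketch; the port number here is arbitrary):

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

// Send each compressed depth frame to every connected client
// (p5.js sketches on this machine or on other laptops).
function broadcast(jpegBuffer) {
  wss.clients.forEach(client => {
    if (client.readyState === WebSocket.OPEN) client.send(jpegBuffer);
  });
}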

Hour 9 (connecting w/ p5.js): https://editor.p5js.org/polarizing/sketches/Hyoci5V37

The video below shows the color depth image being processed directly in p5.js using a very simple color-matching algorithm (keeping blue pixels), which gives us a rudimentary depth-thresholding technique (this would be much easier to do directly in Node, since there we have the exact depth values; we will try that later). The second video shows another rudimentary technique: averaging the positions of the blue pixels to get the center of mass of the person in the video. Of course, this is all made much easier because we have the color depth image from RealSense. It would hardly be possible in a frontend library like p5.js without sending depth camera data across the network, since most computer vision libraries still exist only for the backend. This opens up some new possibilities for creative projects combining a depth camera with frontend creative coding libraries, especially since JavaScript is so adept at real-time networking and interfacing.
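The p5 side is just pixel math. A trimmed sketch of the idea (the websocket image plumbing here is my abbreviation; see the p5 editor link above for the real sketch):

// p5.js: receive depth-colorized JPEG frames over a websocket, keep "blue"
// pixels as a crude depth threshold, and average their positions.
let img;
const ws = new WebSocket('ws://localhost:8080'); // same port as the Node server
ws.onmessage = e => { img = loadImage(URL.createObjectURL(e.data)); };

function setup() { createCanvas(1280, 720); }

function draw() {
  if (!img || img.width === 0) return;
  image(img, 0, 0);
  img.loadPixels();
  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < img.height; y += 4) {   // sample every 4th pixel
    for (let x = 0; x < img.width; x += 4) {
      const i = 4 * (y * img.width + x);      // RGBA pixel layout
      const r = img.pixels[i], g = img.pixels[i + 1], b = img.pixels[i + 2];
      if (b > 150 && b > r && b > g) {        // "blue enough" = within depth range
        sumX += x; sumY += y; count++;
      }
    }
  }
  if (count > 0) ellipse(sumX / count, sumY / count, 20, 20); // center of mass
}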

Before moving on to more depth examples (point clouds, depth thresholding, and integrating OpenCV to do more fun stuff) and figuring out the best way to interface with all this, I want to see if I can get the depth data sent through to Processing as well.

Hour 10 (Processing): I spent this hour researching ways to send blobs (binary large objects) over the network, weighing WebSockets against OSC, and looking into whether Java can actually decode blob objects. I decided to move on rather than keep working on this part.

Hour 11 and Hour 12

I was met with a few challenges. One of the main ones was the difference between asynchronous and synchronous frame polling. I did not know that the Node.js wrapper has two different calls for polling frames: a synchronous, thread-blocking version, pipeline.waitForFrames(), and an asynchronous one, pipeline.pollForFrames(). The async version is what we want, but it needs an event-loop timer (setInterval, or preferably something better, like a draw() function) that calls pollForFrames roughly every 33 milliseconds to match a 30fps stream.
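Once that clicked, the non-blocking loop is small. pollForFrames and waitForFrames are the wrapper's own calls; the timer, the frameset accessor, and the handleDepthFrame helper are my sketch:

// pipeline.waitForFrames() blocks the thread; pipeline.pollForFrames()
// returns immediately (nothing when no new frame is ready), so drive it
// from a timer running at roughly the stream's frame rate.
const rs = require('node-librealsense');
const pipeline = new rs.Pipeline();
pipeline.start();

setInterval(() => {
  const frames = pipeline.pollForFrames(); // non-blocking
  if (frames && frames.depthFrame) {
    handleDepthFrame(frames.depthFrame);   // colorize / compress / broadcast as above
  }
}, 1000 / 30); // ~33 ms per tick for a 30fps stream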

Hour 13 and Hour 14

Streaming raw depth data, receiving it in p5, and processing the image texture as points to render a point cloud (lots of problems here).

I wanted to stream the raw depth data because, as of now, I have color depth data, and having to post-process it is a pain. I wanted to send the depth data either as a grayscale image with depth values encoded from 0 to 255, or as raw values that we convert with a depth scale. This would be similar to how the Kinect operates.

I thought it would be pretty simple: just get the depth scale of the sensor, multiply each raw depth value at each pixel by it, and write the result back to the data buffer.
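In a sketch, the conversion I was attempting looks like this (my intent, not the broken code from that night; I hard-code the depth scale here because the accessor name varies by wrapper version):

// Raw depth frames are Uint16Array values in device units; multiplying by
// the sensor's depth scale gives meters. Map 0..maxRange meters to 0..255.
const depthScale = 0.001; // typical RealSense scale: 1 unit = 1 mm (query the sensor in practice)
const maxRange = 4.0;     // meters; clip anything farther away

function toGrayscale(raw) { // raw: Uint16Array of length 1280 * 720
  const gray = new Uint8Array(raw.length);
  for (let i = 0; i < raw.length; i++) {
    if (raw[i] === 0) continue; // 0 = no depth reading; leave black
    const meters = raw[i] * depthScale;
    gray[i] = Math.max(0, Math.min(255, Math.round(255 * (1 - meters / maxRange))));
  }
  return gray; // near = bright, far = dark
}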

However, I was stuck for quite a long time because I was sending nonsensical values over WebSockets, which resulted in a very weird visualization in p5. I wish I had taken a video of it. I believe I was doing something wrong with the depth scale and my understanding of how the raw values worked. I decided to go to sleep and think about it the next day.

Hour 15 and 16

When I woke up, I realized something simple that I had overlooked: I did not need to convert the raw values myself. I remembered that the RealSense Viewer has an option to view the depth data as a white-to-black color scheme, and that I could toggle the same color scheme configuration when calling the colorizer to convert the depth map to an RGB image.

If I did: this.colorizer.setOption(rs.option.OPTION_COLOR_SCHEME, 2)

I could set the color scheme to an already-mapped 0–255 grayscale image. Then it would be the same process as sending the color image over. The simplicity of this approach was unbelievable: right after this realization I implemented it, tried it, and the results were immediate. I was able to send the depth image over to p5 and have p5 sample it to render a point cloud (see the video below).
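The point cloud itself is just sampling that grayscale image in p5's WEBGL mode; a minimal sketch (assuming the same websocket image plumbing as before):

// p5.js (WEBGL): plot one point per sampled pixel of the grayscale depth
// image, using brightness as the z coordinate.
function setup() { createCanvas(1280, 720, WEBGL); }

function draw() {
  if (!img || img.width === 0) return;
  background(0);
  stroke(255);
  img.loadPixels();
  const step = 8; // sample every 8th pixel to keep the point count manageable
  for (let y = 0; y < img.height; y += step) {
    for (let x = 0; x < img.width; x += step) {
      const i = 4 * (y * img.width + x);
      const depth = img.pixels[i];           // grayscale, so R == G == B
      if (depth === 0) continue;             // black = no depth data
      const z = map(depth, 0, 255, -400, 0); // nearer (brighter) pops toward the camera
      point(x - img.width / 2, y - img.height / 2, z);
    }
  }
}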

Hour 17 and 18

I was able to link multiple computers to receive the depth information being broadcast from the central server (my laptop). I also took a demo video in a taxi, once I realized I was not limited to the studio: the RealSense is portable and powered over USB. I simply ran the p5 sketch and the local server, and it worked!

Taxi Test Video:

IMG_9288-1

Hour 19 and 20

I used the remaining time to work on a quick presentation on my findings and work.

https://docs.google.com/presentation/d/1sAfK0ugYRg4xGDMlughxa2bBDK94lgAUkBgh6LAyLNI/edit?usp=sharing

Conclusions and Future Work

After doing this exploration, I think there is potential in further developing this into a more fully-fledged “Web Realsense” library similar to how the Kinectron (https://github.com/kinectron/kinectron) works for the Kinect.

Working With Electrons – Ferrofluid Heartbeat Kinetic Sculpture (Kevin Li)

What I really wanted to do for my kinetic sculpture was start experimenting with ferrofluid, as I have begun to form a better understanding of what I want to do for my midterm and final projects.

So I used this assignment as a launching pad to get some experience playing with ferrofluid to create something "kinetic," or moving. Originally I had decided I wanted to work with ferrofluid without really thinking about why. When I brainstormed the sort of interaction one could have with it, something simple came to mind: have the ferrofluid beat to your heart rate.

I actually spent a few days thinking about this idea and thought it could be quite immersive as a sculpture or installation: a flat plate of black ferrofluid (perhaps even shaped as a black heart). A user would put their finger on a heartbeat sensor (which is quite reliable), and the ferrofluid would pulse, attract, and spike up with each beat. Headphones on a stand nearby would play the sound of a heartbeat and the sound of the ferrofluid, making it an audio-visual sculpture.

I talked about this idea with Professor Moon and some friends, who advised me that while it might be interesting, the idea of using heartbeat was a little dull (it has been used in art installations for many years) and two-dimensional (just by looking at the sculpture with its sensor and headphones, you can probably guess its meaning and intention). I still think it would be a great effect, but I wanted to improve my idea for the better, so I scrapped it in general.

However, I decided to take the idea and try it out as a simple kinetic sculpture for this assignment, just for fun. It would not be designed or crafted in any way; I just wanted to see how I could move and shape ferrofluid using electricity!

So I went and bought lots of materials (ferrofluid produced by Ferrotec is, by the way, quite expensive in China): some pipettes and glass petri dishes.

And … a complete disaster. Ferrofluid, being made of nanoparticles, stains everything it touches. The solution I used was water, which caused oily black stains on the glass and left particles floating in the water. I suspect the "glass" petri dish I bought was also impure.

I decided to go with a consumer ferrofluid toy that came with treated glass, suspension liquid, and ferrofluid, all ready to use.

I borrowed a very low-powered electromagnet breakout board from the studio before realizing it was basically useless, and bought my own. Surprise! It turned out just as useless: I had bought a small P20/15 electromagnet (width and height) with a pulling force of 2.5kg. You can see the results in the picture above, where the magnetic attraction on the ferrofluid was so weak that it looked more like a sorry blob. Magnetic force falls off very rapidly with distance (roughly with the cube of distance for a dipole), so the thickness of the glass really diminished the effect of the small electromagnet.

At this point, I wasn't too happy with how it looked, so I tried some alternatives to electromagnets. I considered a more analog approach, in which a permanent magnet is attached to the end of a solenoid and the push-pull of the solenoid moves the permanent magnet closer to or farther from the glass.

I made a prototype with a pen attached to the end of the solenoid and the permanent neodymium magnet on top. The magnet could not be attached directly to the solenoid because the neodymium magnet was so strong that the solenoid could not pull it up.

This is an alternative prototype with two solenoids and a metal screw. Again, the neodymium magnets were attached on top.

Now, this worked pretty well, but again I was kind of disappointed with the results. First off, the distance the solenoid could push upwards was limited (10mm). This meant there was always an attractive force from both magnets affecting the fluid, even when the solenoids were "off." I researched longer throws and found that even long-pull solenoids have a maximum reach of around 30mm – 60mm, and they can only be operated for around 5-10 seconds before the coil heats up tremendously. That was a bummer. Another problem was that the permanent magnets were so strong that they would sometimes pull the two structures together, making everything very hard to control. It worked fine when the neodymium magnets were opposing each other, but when they pointed in the same direction without enough distance between them, the two metal screws would essentially snap together. That could probably be fixed with a stronger structure or different mechanics, but the most important problem was that I could not efficiently control the speed at which the fluid was attracted. When the solenoid shot up, the fluid would snap to the neodymium magnet, producing a rather mechanical feeling instead of the fluidity and natural organic motion of the fluid itself. So how could I get more control over the distance the solenoid traveled? Something like a linear actuator would probably work, but those are prohibitively expensive.

So, back to the electromagnet idea. I immediately went out and bought a larger electromagnet, this time a P34/18 with a pulling force of 18kg, and to my surprise, the result was much better!

Spikes, finally, and some resemblance of a heartbeat! Immediately following this, I realized that an electromagnet is still probably the best way to achieve the aesthetic I want, so I went and bought two even stronger electromagnets to experiment with (P34/25 and P45/30). The next steps are to hook up several electromagnets in parallel and use Arduino PWM to control their intensity and current. I also bought some more prototyping materials, and I still need to figure out the mysterious suspension fluid!

 

 

Working With Electrons – Lab Report: Indirect Measurements (Kevin Li)

Title of Lab

Indirect Measurements

Introduction

Statement of Purpose and Hypothesis

The goal of this lab is to use different techniques to measure electrical quantities such as current and power. It is sometimes difficult to take minute or indirect measurements, but using Kirchhoff's, Ohm's, and Faraday's laws, we can estimate the results. This will also help us bridge the connection between the formulas and the physical experiments we see in lab, and strengthen our understanding of the relationship between current, voltage, and power.

Description

Materials

  • Current and Voltage Meter
  • AWG26 Wire
  • Voltmeter and Ammeter
  • White LED
  • Two 1.5V Batteries

Procedures

  1. For the first experiment, we want to use a current meter and a voltage meter to calculate the resistance of an AWG26 wire with a current of 1mA, 1A and 2A (after 10 seconds). We also want to record a diagram of the connections and data collected.
  2. For the second experiment, we want to determine the current across an LED connected to two 1.5V batteries using a voltmeter and an ammeter.
  3. For the third experiment, we want to create a circuit that consumes 1 watt of power and draw the schematic and calculations / measurements.

Data / Observations and Results

For the first experiment, we wanted to measure and calculate the resistance of the copper wire. I researched resistivity a bit and found that copper's resistivity is very small, about 1.7 × 10^-8 ohm-metres, which is next to nothing and means copper is extremely good at conducting electricity. We can calculate the resistance of the wire using the formula R = ρL / A,

where ρ = 1.7 × 10^-8 ohm-metres is the resistivity of copper, L = 0.1 m is the length of the AWG26 wire we used, and A = 0.13 × 10^-6 m^2 is its cross-sectional area (AWG26 is 0.13mm^2). This gives a resistance of about 0.013 ohms. In our observations, we adjusted the voltage on the power supply to 0.11V, which induced 1.16A. That means the measured resistance of the wire was R = V / I = 0.11V / 1.16A, or about 0.095 ohms. We then adjusted the power supply to 0.25V and were able to induce 2.00A: 0.25V / 2.00A is 0.125 ohms. Both values are greater than my calculation from the formula, but there could have been errors in my calculations, or the wire we used may not actually have been AWG26.

For the second experiment, we used two 1.5V batteries to power a red LED. The two batteries did not burn out the LED because, having been used before, they were actually supplying only 2.3V. The current reading was 72.5mA. As you can see in the picture and diagram, we had to use two meters to measure this. Alternatively, I think we could measure the current by putting a resistor in series between the power source and the LED, measuring the voltage drop across the LED, subtracting that voltage from the supply voltage, and applying Ohm's law to the resistor to find the current through the LED.
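As a worked example of that alternative (the 100-ohm resistor value is hypothetical, just to show the arithmetic): the voltage across the resistor is V_supply − V_LED = 3.0V − 2.3V = 0.7V, so by Ohm's law the current through the resistor, and therefore through the LED in series with it, would be I = 0.7V / 100 ohms = 7mA.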

Diagram: 

In our final experiment, we were able to build a circuit consuming roughly 1 watt of power by using a 220-ohm resistor at 14V. Power is given by the formula P = V^2 / R; in our case, 14^2 / 220 ≈ 0.89W. We were able to burn the resistor at around 22V, which works out to 22^2 / 220 ≈ 2.2W, so the resistor's maximum power rating must have been below that; since it got very hot well before we even reached 1W, its rating was probably around 1W or lower.

Working With Electrons – Using Magnetic Fields (Kevin Li)

Title of Lab

Using Magnetic Fields

Introduction

Statement of Purpose and Hypothesis

The goal of this lab is to gain a working knowledge of electromagnetism. We will design, test, and draw conclusions from a few experiments that produce a magnetic field, and examine how to build a simple electric motor that turns electricity into motion by exploiting electromagnetic induction.

Description

Materials

  • AAA batteries
  • Battery power pack
  • Enameled wire
  • Magnet
  • File / Knife and Lighter
  • Helping Hands
  • Bar Magnets
  • Thread or String
  • Multimeter

Procedures

  1. For experiment one, we make an instrument that can detect magnetic fields by hanging a magnet from a string. We take another magnet and push it closer to the hanging magnet. The hanging magnet should rotate toward the magnet we are holding, allowing our hanging magnet to detect a magnetic field nearby.
  2. For experiment two, we induce an electrical current through enameled wire and place it close to our hanging magnet. We also measure the current going through the wire using a multimeter.
  3. For experiment three, we coil the wire as a loop and add many turns to it. We measure the current and place it close to our hanging magnet.
  4. We build a simple DC motor that uses the principles found in experiments two and three.

Data / Observations and Results

Our observations in experiment 1 showed that holding a magnet close to a hanging magnet produces a magnetic force: depending on their orientation, the two attract or repel each other.

We connected two 1.5V batteries in series with a 1K-ohm resistor and measured the current. As expected, the multimeter read a number very close to 3mA, since I = V / R gives 3V / 1K = 3mA. The actual reading was 2.82 milliamps, which could be due to the internal resistance of the battery cells causing the output voltage to drop. When we brought this circuit near the hanging magnet, there was some slight movement from the magnet, showing an active magnetic field.

We then coiled the wire into a loop and added a few turns. We did not measure the current for this coil, although we probably should have. Bringing this circuit near the hanging magnet caused a larger disturbance and more movement, produced by the stronger magnetic field.

We used a DIY kit for the last experiment and created a motor using the solenoid-like core provided in the kit. We also experimented with non-enameled wire, which worked just the same; the only difference was that the regular wire was slightly bulkier. It would be interesting to find out how the number of turns relates to how fast the motor turns.

Video of Motor

IMG_1098

Working With Electrons – Kinematic Sculptures (Kevin Li)

Compare & contrast kinematic sculptures from two different artists.

These are two projects I saw a year ago that captured my attention.

The first one, linked above, was created by the artist John Edmark. The sculptures are called "Blooms": 3D-printed sculptures that rotate at high speed, in sync with a strobe light flashing so fast that our eyes do not actually see the flashes; instead, we see steady light. The sculpture turns 137.5 degrees, the golden angle, with every flash. In this way, we see patterns and animations emerge from the spiral design of the sculpture. I worked on a similar project before, in which an electromagnet was switched on and off at a very high frequency (say 90Hz) while a light strobe ran slightly faster, say 90.1Hz. The electromagnet vibrates an object at the given frequency, which we as humans cannot see; the strobe "freeze-frames" the motion, making the vibration appear in slow motion and producing a fantastic effect.

The second project is a more physical installation. It is called Diffusion Choir and is made up of hundreds of small origami forms that shrink and expand, together forming large-scale shapes. Tiny motors push and pull a very durable type of paper manufactured by DuPont called Tyvek. The piece is driven by software that creates a choreography, opening and closing the sculptures like a flock of birds.

Working With Electrons: Geomantic Device (Kevin Li)

Something that has always interested me is palm reading (also called palmistry). I remember my parents used to tell me about a "life line" and how I would live a long life because the line was long. Later on, I found out there are many other lines, such as the heart and head lines, as well as fate lines. I've also gotten a psychic reading from a palm reader before, and everything she said was "true," but I also know from research about the Barnum effect, where individuals give high accuracy ratings to descriptions of their personality that sound specific to them but are in fact very vague and general. For example, I remember the palm reader saying something along the lines that I would at times be extroverted and sociable, while at other times introverted and reserved. This applies to me, and in the moment it felt especially true, but looking back, most people can relate to it, since very few people are always sociable or always introverted in their daily lives. At the same time, I feel like palm readings and various divination techniques have some sort of therapeutic, self-reflective effect. After all, it is not often in our busy lives that we can sit down and just look at things without questioning them too much.

Working With Electrons: Lab Report 1 – Measuring (Kevin Li)

Title of Lab

Measuring Magnetism

Introduction

Statement of Purpose and Hypothesis

The goal of this lab is to complete a simple experiment with magnets to gain a working knowledge of laboratory work. We will design, test, and draw a conclusion from an experiment that uses magnetism. In my case, I chose to work with an experiment found here:

http://cse.ssl.berkeley.edu/artemis/pdf/measure_magnetism.pdf

The purpose of this experiment is to use a compass to measure the magnetic field of a bar magnet and its direction. My hypothesis is that the end of the compass needle that points North will be attracted to the South pole of the bar magnet, and the opposite end of the needle, which points South, will be attracted to the North pole of the bar magnet.

Background Information

After doing this lab, I watched a very interesting video by the physicist Richard Feynman on the question of why magnets repel each other. The short answer is that it is extremely hard (if not impossible) to answer, because magnetism is a fundamental force: nobody really knows why it works, only how it works (how particles and forces interact with each other). Through experiments, however, we can begin to understand the behavior and phenomena of magnets. I also learned through some research that, in simple terms, moving electrons cause magnetism. At the atomic level, electrons have two states of angular momentum (spin). Opposite spins cancel each other out, and magnetism occurs when an electron's spin is not cancelled.

Description

Materials

  • Magnetic Compass
  • Bar Magnet
  • Iron Filings
  • Thin Sheet of Paper

Procedures

  1. With two strong bar magnets, experiment and record some observations about their attraction and repulsion.
  2. Take a magnetic compass and move it around the bar magnet. Record what happens.
  3. Place the bar magnet and the compass on a sheet of paper and trace the magnetic force field direction with the compass. To make the tracings, draw a dot somewhere near the magnet and place the center of the compass over the dot. Draw a dot at the location of the arrow head of the compass needle. Move the compass to this new dot and repeat this process. Draw lines connecting the dots with arrows indicating the direction the compass points. Continue doing this until the line meets the magnet. Pick another spot near the magnet and repeat.
  4. Take iron filings and a sheet of thin paper. Place the paper on top of the bar magnet, lightly sprinkle the iron filings evenly over it, and give the paper a few shakes so the filings align with the magnetic field. Record observations.

Data and Observations

Conclusion

I observed that when two bar magnets face each other with like poles, there is a repulsive force, while with unlike poles there is an attractive force. This demonstrates that "opposites attract" for magnetic forces. When I traced the compass needle to map out the magnetic vector field, I could see the dipole field, with field lines wrapping around from the North pole and eventually entering the South pole. The iron filings showed a pattern very similar to the traced drawing. Since iron is ferromagnetic, the magnetic field causes each filing to act like a little magnet: each filing's south pole attracts its neighbors' north poles, forming chains of filings that line up along the direction of the magnetic field.

Week 13: Greene Response – Szetela

Rachel Greene's Internet Art shows how artists have employed online technologies (websites in general) to create new forms of art and to move into fields normally beyond what one would deem the "artistic realm."

I think what is really interesting about the advent of the Internet is that Internet Art feels quite different from artwork created in the past, because it has the opportunity to be created and experienced by so many people. Many people who are not "artists," including me, have used the Internet and technology to create and explore art. The Internet has turned many people who would never have considered themselves artists into creators. It has also provided lots of opportunities to view other people's artwork, to be influenced by others, and to continually expand the domain of knowledge that goes into creating art. Technology in general has allowed art to take on many new forms: interactive, video, audio, and even collaborative in real time (Reddit's collaborative pixel art experiment, r/place).