RAPS – BirthDeath

BirthDeath

Partners: Maxwell, Vivian

Project Abstract

A live audiovisual performance in collaboration with Maxwell and Vivian. This performance explores the limits of the human body.

Project Description

BirthDeath is about the limits of the human body, and pushing those limits even further, until the body collapses. Maxwell and I created a realtime audiovisual performance following this concept.

With this performance, our purpose is simply to gradually provoke an overwhelming feeling in the audience and even increase their heart rate as the overall pace and feel of our piece speed up. This performance is not necessarily meant to make you reflect on anything; it is simply a sensory experience.

Our inspiration for this piece came from our interest in working with dance performance and the body. Maxwell and I have both danced, and we really wanted to explore incorporating the body into our audiovisual performance. We originally wanted to use a heartbeat monitor and an accelerometer to modify some values in our Max patch, but since this was too complicated for the little time we had, we decided not to use them. However, we kept the dance performance.

Perspective & Context

Our performance fits into the historical context of visual music and abstract film in the sense that we really wanted to create a correlation between the sound and the visuals, although not all of the audio parameters were modifying the visuals. Our communication during the performance was essential.

I think that nowadays, because of our constant need to maximize time and productivity, we push our bodies with the last bits of energy we have every day. We forget that our bodies have a limit and act as though we are invincible.

Development & Technical Implementation

From the beginning, Maxwell and I had a clear idea that they were going to work on the audio and I would work on the visuals. Maxwell created the audio patch alone, but in my case, working on my patch while Maxwell played around with the audio served as a guide for creating the visuals.

Part of the inspiration for this piece came from Maxwell’s and my interest in using sensors in the performance. Thus, we also wanted to implement Arduino in our piece so that we could use sensors such as a heartbeat monitor and an accelerometer. We did research on different types of heartbeat monitors, but the only ones available to us were not reliable at all and were quite complicated to use. We considered buying a nicer sensor, but we still did not know how to use it and did not have enough time to make it work. Thus, we decided to only keep the accelerometer. With Eric’s help, we got a patch that could send Arduino data to Max, so getting the accelerometer values was not too hard. However, attaching the accelerometer to Maxwell with a Bluetooth Arduino did not seem very reliable either, so we officially decided to abandon our idea of using sensors.

Instead, since I still wanted to show a correlation between the heartbeat and the visuals, I made the amplitude of Maxwell’s audio determine the size of a 3D model of a heart, as well as the redness of the screen at the beginning of the piece. This is essentially what I wanted to use the sensors for anyway, and it was definitely a much better and faster way of going about it.
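Just to illustrate the idea of the mapping (the actual piece was a Max/Jitter patch, so this is only a rough analogue written as a Processing sketch, and the thresholds, ranges, and the flat circle standing in for the 3D heart are all my own placeholder assumptions): the louder the live input, the redder the screen and the bigger the heart.

// Rough Processing analogue of the amplitude mapping from the Max patch.
// Only an illustration: the numbers and the circle standing in for the 3D heart
// model are assumptions, not the values used in the performance.
import processing.sound.*;

AudioIn in;
Amplitude amp;

void setup() {
  size(640, 480);
  in = new AudioIn(this, 0);     // live audio input (e.g. the audio routed back in)
  in.start();
  amp = new Amplitude(this);     // amplitude analyzer
  amp.input(in);
}

void draw() {
  float level = amp.analyze();                      // current input level, roughly 0.0 to 1.0
  background(map(level, 0, 0.5, 20, 255), 0, 0);    // louder input = redder screen
  float d = map(level, 0, 0.5, 50, 400);            // louder input = bigger heart
  fill(200, 0, 0);
  ellipse(width / 2, height / 2, d, d);             // stand-in for the 3D heart model
}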

As for the rest of the visual components, I originally meant to generate all of the visuals in Max. However, since I do not really know how to create graphics in Max, I decided to use screen recordings of sketches that I had previously created in Processing. In the patch, I switched between four videos: one of a red background, another of the 3D model of the heart, and two more for the videos of my Processing sketches. I used functions such as rotation, zoom, scramble, and a multiplier. When it came to modifying the visuals live, the MidiMix was fundamental to the success of the piece. I cannot imagine having the same results without it; it really made all the values easier to access and to alter.

Overall, we had two different patches, one for the audio, and one for the visuals. This means that we used two different laptops in the performance, and they interacted through our own improvisation and through the amplitude of the sound.

Performance

The performance was the first time we all ran through the piece. It went way better than we expected. I was terrified because I was not sure it was going to go well, and I did not know when the performance would end, so much of it was improvised on the spot, trying to make the visuals fit the sound and the dance.

In terms of what could go better, Maxwell kept walking on and off the stage to work on the audio, which we realized was not a great idea because it did not add anything to the piece and was in fact a bit distracting. So we decided to cut the dance from the performance and focus only on the audio and visuals.

Even though Maxwell and I made the greatest contribution to the project, Vivian was very helpful during the performance: if we wanted to keep the dance, someone had to control the sound, so Vivian took that role.

Maxwell and I had the opportunity to perform in Miki’s show at Extra Time Cafe & Lounge. Here is a picture of us during the performance.

Conclusion

Overall I am very happy with how this project turned out. At first it seemed a bit chaotic because we did not really know what direction to go in, but we ended up figuring it out. Working with Maxwell was great; they did an amazing job with the sound, which really helped me develop my part of the project. And Vivian was very helpful during the performance because she was able to control the audio while Maxwell was dancing; I would not have been able to control the sound and the visuals at the same time. I really enjoyed doing this project and hope to create more live audiovisual performances in the future.

Being able to perform at Extra Time Cafe & Lounge was an amazing experience.

DSFA – REFLEXION

REFLEXION

This 3D animation takes place inside the bathroom of a nightclub, where a girl is high on MDMA and sees herself in the mirror. Her reflection shows her jaw moving, her eyes rolling, and her slowly dancing to the music until she hears a girl screaming, which pulls her out of that trance, and she walks out of the scene. It is not known whether she has been drugged or decided to take the drug herself, but that is not what matters. This scene is a reflection with many questions left open for the audience to resolve. I want to express the mystical experience of drugs, which can be both scary and seductive.

This project was 3D modeled in Maya and Mudbox, and the end sequence was compiled and edited in After Effects.

After fixing and re-texturing my 3D model, I finally started to create the expressions for my animation. Here you can see the final version of my model. I also added painted make-up to its texture for the club scene.

These are some of the expressions that I sculpted in Mudbox:

 

After modeling the expressions in Mudbox, I sent my model to Maya, where I continued by rigging the neck and head and animating it with the facial expressions. As for the lighting, I decided to add one spotlight and one point light to my scene. When it came to rendering, I rendered the view from three different cameras: one for the front (the mirror reflection), one for the back view, and the last one for a close-up of the mouth and eyes. Below, you can see some of the camera shots:

 

 

In After Effects, I added several different effects to make the scene more psychedelic and to finalize it. Some of these effects include Echo, Gaussian blur, Tint, Opacity Flash, etc. I also used masks to create the broken-mirror illusion. For the setting, I added some images and textures, such as the bathroom image, a texture of stains on glass, broken glass, and a reflection to make the mirror more realistic.

Some of the audio for this piece was downloaded from Freesound.org and edited in Audacity. The main track of this piece is Sun My Sweet Sun by Konstantin Sibold. I also added audio samples of people talking, a creaky door, a door closing, and a girl screaming to create a more realistic ambiance for the scene. I added a low-pass filter to the main audio track in order to create the sensation that the sound was coming from outside the bathroom, in the club.

These are some shots of my sequence.

Overall, this was probably one of my favorite projects I have done. I feel like I have learned a lot doing this project and would love to continue it further in the future.

IMD: Remarkable Architects

Remarkable Architects

 

Partners: Kefan Xu and Ellen Yang

In this project we created a motion design sequence in which we presented three remarkable architects from The Pritzker Architecture Prize. Kefan did I.M. Pei, Ellen did Wang Shu, and I did RCR Arquitectes. We each presented two of their architectural works. We decided to follow Kefan’s design proposal, which consists of simple black-and-white images alongside black lines following the shapes of the buildings. He got this inspiration from this Apple commercial:

For the typeface, I proposed using Big Caslon and we all agreed to use it, as it is very elegant and suits our overall design. Here is the specimen for Big Caslon:

The workload was divided based on preferences: Kefan created the title sequence, I created the end credits, and Ellen composited everything together.

For my sequence of architects, I chose RCR Arquitectes. RCR Arquitectes is a collective from Spain, and they have designed buildings in many parts of Europe. For this piece, I chose their Bell-Lloc Winery, in Spain, and the Soulages Museum, in France.

 

This project was done in After Effects. In my sequence, I used fades in and out, the enlargement of the components of each frame, the black lines, slowly revealed images, and letters in the background to make the transitions and the overall sequence very smooth and elegant. The letters in the background were chosen based on the shape of the buildings and on their names. For instance, for the Soulages Museum, I chose the “S” for “Soulages” and aligned it with a structure in the middle of the image. You can see this in the images shown below:

For the end credits I decided to keep the simple style of just black lines to support the text. Here are some images of the end credits:

 

 

Here are both my sequence of RCR Arquitectes and the end credits for our project:

RAPS: Live Cinema – Reflection

The first difference I recognized between VJing and live cinema is the importance that is given to the visuals. In VJing, the visuals become part of the background for a DJ performance in a club, giving very little importance to the VJ’s work. In live cinema, however, the main point of attention is the visual work, as it is performed in a more professional setting where the focus is the visual artist. Live cinema also has a greater sense of storytelling or concept, which VJing generally lacks. Another difference which I found interesting is “the two-way, instantaneous feedback between the creator and the public” (83), as mentioned in the reading. Furthermore, another difference discussed is that VJing has more commercial purposes compared to live cinema’s artistic approach.
Live audiovisual performance is a combination of sound and image that is improvised. Its style is not limited or concrete, and as the author says, the term refers to “a generic umbrella that extends to all manner of audiovisual performative expressions,” among which you can find VJing and live cinema.

RAPS: VJing Response

VJing Response
As the reading explains, VJing is performative because although a VJ has a selection of sequences, VJing requires live manipulation of the video. Thus, the presence of the VJ is a fundamental component of this practice. This means that it is as ephemeral as any other live performance. I actually like the term “visual jammer” better than “video jockey” because, in addition to the fact that it sounds nicer, I think it is more aligned with what a VJ does.
As the essay says, “a VJ always visualizes something else,” and it is rare to see a VJ performance by itself. I wonder what a performance of the opposite would be like: audio representing the visuals. However, I am not sure I would still call it VJing if that were the case. I guess the reason why VJing is not seen as art is that oftentimes there is no conceptual meaning behind the visuals. But I do not think this is a bad thing. In my case, I genuinely enjoy creating and staring at visuals because, even though there might not be any conceptual meaning behind them, they do take me to a meditative state, and I really appreciate this. And in the club scene, visuals really add to the experience, especially in techno clubs and more underground settings.
Honestly, one of my “dreams” is to be a VJ at some point. I love going out and seeing cool visuals projected or played on a screen. I recognize that being a VJ means that your work is not the central point of the event, but I would not mind VJing for some time. However, it is true that I would do this for fun rather than for stable financial support.

RAPS Midterm

Valentine My Funny
Project Abstract

Valentine My Funny is an audiovisual performance piece. In this piece, Maxwell and I combined physical objects and Max in order to accompany the musical piece Valentine My Funny, by F.S. Blumm and Nils Frahm.

Project Description

Maxwell and I did a realtime audiovisual performance to Valentine My Funny, by F.S. Blumm and Nils Frahm. Through the combination of material objects and Max, we created an abstract performance to visually accompany the music. We built a stage out of a sheet of paper and cardboard. Then we projected light onto the paper, moved objects around the space, and recorded the projections on the stage with a webcam. This feed was then modified in Max and projected for the audience to watch.

For us, this piece was more about experimenting with physical objects as a way to create abstraction that would fit the musical piece, rather than about expressing a particular concept. Maxwell had previously done a dance piece to this song, so he suggested it and we both agreed it was the right piece for our performance. It is a very sparse song, which means the audience would focus more on the visual aspect of the performance. So we let the song inspire certain movements with the objects and went looking for abstraction. We were also trying to create some of the visual styles that early light artists used, such as in Oskar Fischinger’s work.

Perspective & Context

We were very interested in the earlier experimentations in abstract film done by artists such as Oskar Fischinger (Komposition in Blau) and Walther Ruttmann (Lichtspiel: Opus II). Thus, I would say that our performance fits with their projects. We tried to make the visuals accompany the music, as Oskar Fischinger did, and tried to represent the kinds of movements that Walther Ruttmann made. Overall, Maxwell and I were focusing more on the material side of it rather than on the technological side. Thus, we fit more with earlier works, which did not rely on technology very heavily.

Development & Technical Implementation

Materials: bottle of water, thread, light gels, cardboard, paper, paper clips, glue gun, cutter, glass container, balloon inflator, book, spotlight, cardboard box, tripod, webcam and other small objects.

We started by creating the stage. To make it, we used cardboard, which kept the paper still, and attached the paper with clips. Then we set up the light, which projected from behind the paper, allowing the webcam on the other side to record the shadows. We then started collecting a variety of objects that produced interesting shapes and shadows, and used light gels in order to change the background color. We also modified some of the objects to make them easier to use, such as adding thread, or cutting cardboard and some of the light gels to create patterns and shapes.

Once we had some ideas for the performance with the objects, we started creating the Max patch. We wanted to keep this simple too, but at the same time make the performance a bit more visually stimulating. Thus, we modified the brightness, hue, and saturation of the feed and added continuous rotation to the video to make it less static.
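To give an idea of what the patch was doing to the feed, here is a small Processing sketch that does something similar (the real version was a Max patch controlled live; the tint values and rotation speed here are only placeholders I made up): grab the webcam image, shift its color and brightness, and keep it slowly rotating.

// Rough Processing analogue of the Max patch's treatment of the webcam feed.
// The tint values and rotation speed are placeholder assumptions, not the real settings.
import processing.video.*;

Capture cam;
float angle = 0;

void setup() {
  size(640, 480, P2D);
  cam = new Capture(this, 640, 480);   // default webcam at 640x480
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();     // grab the latest frame
  background(0);
  translate(width / 2, height / 2);
  rotate(angle);
  angle += 0.005;                      // continuous slow rotation
  imageMode(CENTER);
  tint(255, 180, 200);                 // crude color/brightness shift (placeholder values)
  image(cam, 0, 0);
}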

Performance

Our performance went very well. I actually think it was one of the best versions of our performance. Although I was quite nervous before we started, I think we did a pretty nice job performing our piece and luckily, it all worked together.

I think what worked really well was having one person take care of the ‘stage’ and the other take care of the software. In this case, Maxwell was mainly handling the physical objects and I was handling Max. Although I helped Maxwell on several occasions throughout the piece, I think that assigning one person to pay attention to the software is very important. For instance, there was a moment when the screen went completely red while switching between different light gels, which I was not expecting. But since I was in charge of making sure that the Max part of the project was going all right, I quickly tried to fix it. There is always room for improvement, though, and maybe we could have practiced a bit more. For instance, I would have liked to control Max better when switching the light gels, because sometimes the effects would go crazy and show up as either overexposed or underexposed.

After watching the recording of our performance, I felt that we were actually quite successful at creating this experience. Although this was not our intention, I think the performance was magical in a way and succeeded at changing the atmosphere. Overall, I am really happy with the way our performance turned out.

 

Graphical Score

Our graphical score mostly represents the musical movements of the song. However, some parts do represent the objects or shapes we use, such as the book (at the end of the third line) and the circle-semicircle-circle-semicircle. We also divided the song into different colors, which do not exactly match those we used for the light gels. This was useful when thinking about the different stages of the song and its development throughout. Since our performance depends heavily on the material objects, we always modify some parts as we perform, depending on how long it takes us to do a certain movement. So our graphical score is a guideline for us to follow the piece, but we do not let it limit us when performing.

Conclusion

Overall, I am very happy with our project. I got much more comfortable with using Max (also because of Eric’s help), and now I feel like I can use Max much better and I am not as afraid to experiment with it. I also noticed that my laptop gets very ‘overwhelmed’ whenever I start doing more complex things in Max, which I find very annoying and wish could be fixed. I think we could have controlled Max a little bit better so that the output was always clear to see. But I am very happy with our performance. I think we did a great job, and this project definitely inspires me to keep trying different methods of performing realtime audiovisual pieces.

IMD: Assignment 3 & 4

Assignment 3 & 4

Silhouette

In this silhouette exercise I pretty much followed our in-class procedure, but I made two versions of it. In the first version, I made it seem as though the subjects are aligned in a row. Here, the camera moves along the z-axis to give depth to the sequence. In addition, the videos start at different times. Here is the sequence of this first version:

 

For the second version of this exercise, I decided to have no camera movement. In this case, I simply copied and pasted the footage many times into the timeline, and I also changed the color of each of the copies in order to make it look more interesting. Furthermore, I delayed the starting point of each copy. Here is the sequence for it:

 

Particles

For the particles exercise, I decided to make a sequence for a potential version of my logo. I started by creating particles for the background. Then, I created the particles that are later seen through the logo. Since my name is Marina, and Marina in Spanish essentially means ocean, I decided to make the particles very tiny and blue in order to represent water particles. After that, I created my logo with the pen tool, which then allowed me to create a mask for the outline of the shapes. So the sequence starts by showing particles in the background, then moves on to showing the logo with the particles, and then this logo expands and fades out of the screen, introducing a graphic version of the logo. To give the particles different motions, I mainly changed the physics values of the Particle World effect. Here is the end result of this exercise:

https://www.youtube.com/watch?v=6PrxcMuCzRI

Morphing

I started doing this exercise in a very different way from what I ended up having. My original idea was to split the video into many, many different squares and then place them randomly within a cube shape. However, I did not realize that I would have had to create separate layers and masks for each of the squares, and this would have taken me forever. Thus, I followed Kefan’s method of splitting the video vertically and placing each rectangle diagonally, forming a staircase shape. Below, you can see the two images I chose to morph (front view and back view of a 3D model of me) and the top view of the layout of these images in After Effects:

Open-Option

For the 30-second open-option piece I decided to use particles.

Motion Capture

For the motion capture exercise, I followed what we had previously learned for tracking a set of pixels and assigning those points to a different object. The only thing I did differently was the background color and the ring color.

High-Speed

I took this video outside the AB, by the garden. As you can see in the video, I poured water onto a plant for this exercise. To record the footage, I used my phone’s slow-motion video mode, which allowed me to record at 720p and 240 fps. However, since the only way to record at these values is through the phone’s slow-motion mode, I had to undo the slow motion manually in After Effects so that I could then play the video back at different percentages of its speed. Thus, once I converted the video to realtime speed, I divided the sequence into segments at 50%, 25%, 12%, 6%, and 3% speed.
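As a rough sanity check on those percentages (assuming a 30 fps timeline, which is an assumption on my part, not something stated above): each output frame at speed s covers s/30 seconds of real time, so the 240 fps capture provides about 240 × s/30 = 8s source frames per output frame. At 50% that is roughly four captured frames behind every output frame, at 12.5% exactly one, and below that (the 6% and 3% segments) After Effects has to start repeating or blending frames.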

Horror Scene

Since the horror scene is only 30 seconds, I wanted to keep it simple. The scene starts with the subject smiling naturally, and then, as time passes, the subject starts making strange movements, as if he were being possessed by something or about to turn into a monstrous creature. When recording the scene, I started by holding a spotlight at his eye height, and then, as the scene develops, I lowered the spotlight in order to create a more frightening environment. His shadow also gets bigger as the spotlight lowers, and I think this suits the piece very well.

In terms of the sound, the two audio files that I used for this video were found on Freesound. I added a white noise sample and a scary distorted sound sample, and I think they fit my piece.

As for the video, I recorded the footage and edited it in After Effects. I used several different effects, such as Gaussian blur, a red tint, brightness and contrast, echo, and turbulent displacement. Most of these start at 0 and increase throughout the video, so that it gets scarier and scarier as the subject gets weirder. I also added a glitch effect by downloading a glitch screen recording and applying it as the displacement map of an adjustment layer. I added this towards the end of the footage, also increasing steadily, and then increased it further for the final text screen at the end of the scene, which says “One day all of this won’t matter”.

 

 

Digital Fabrication Final Project Documentation

Visualone

Visualone allows you to project visuals by just using your phone’s flashlight.

Project Statement

Recently, in my Realtime Audiovisual Performing Systems course, we have seen many examples of artists working with light and projections of abstract shapes in order to make short films and animations. This is something that interests me very much, so my purpose for this assignment is to create my own little visual ‘machine’ using only my phone’s flashlight and the materials that I will laser-cut.

The purpose of my project is pure entertainment and visual stimulation; Visualone does not necessarily solve an everyday problem. However, I have wanted to buy a projector for this specific use for quite a long time, and since I have not bought one yet, I think this is a great opportunity to make something similar to a projector.

 

Inspirations

Mary Ellen Bute is the main artist who inspired my project. Mary Ellen Bute’s main artworks are about visual music: she created an oscilloscope that could be played like an instrument, and that is how she made her visual music. In my project, I will only concentrate on creating the visual aspect, as this is the aspect that interests me the most. My project will not be nearly as complicated as Bute’s oscilloscope, but I am very interested in using light and laser-cut templates to create my visual machine. Below, you can see one of Mary Ellen Bute’s artworks.

https://www.youtube.com/watch?v=3kV6MmwO86A

Another work that inspired me was this 3D printing system, where Trussfab creates joints which are then attached to plastic bottles in order to create large-scale structures. Although I will not be creating large structures, the reason this was inspiring for me is that by creating very tiny connections, you can actually connect larger objects together. Thus, I wanted to create joints that are easy to attach to the templates, so that numerous templates can be placed on top of each other while still being able to rotate manually.

(In the final version of my design, I did not end up 3D printing a joint like this, but only because I found an easier way of placing the templates.)

My own iteration of this project will be an improvement mainly because I am making it so that I can use it with my iPhone’s flashlight. Thus, it will be relatively easy to use and to produce. Furthermore, if I get ‘bored’ of the templates I have, I can design new ones and use them.

 

Project Design and Production

My original idea for the design of this project was to place the phone facing downwards, with the flashlight pointing at the top face of the box, and the templates placed one on top of the other. Here is a representation of what that would have looked like:

However, I realized this was probably not the best way to design Visualone, so I decided to change the design of the box and of the mechanism for rotating the templates. Now, the phone would be placed sideways, with the flashlight still facing the hole in the box for the templates, but the templates would sit on a side of the box instead of on its top. And instead of having 3D-printed joints to rotate the templates, I decided to simply give the templates a gear shape and to place them on top of round platforms which would allow them to be rotated manually. Thus, after rethinking the project, I created a very simple prototype made out of cardboard, where the main structure of the object looks like a box. To make this prototype, I simply cut measured pieces of cardboard and hot-glued them together. My end product looks pretty much the same, except that I fixed a couple of things. Below you can see my cardboard prototype.

Once I had this, I started making the Illustrator files. The first thing I created was the box for the main structure. I used MakerCase to create the outline of the box because I figured it would save me a lot of time. Once I had the layout of the box, I created the hole for the light and the templates. At the bottom of these pictures, you can see the template holders, which are 5mm wood, and above that, you can see the ‘walls’ between each of the templates, which were made out of 3mm wood. The templates were made with 3mm acrylic.

 

To make the templates, I created gears in Illustrator by using the star shape tool and giving the star many sides. Then, I created two ellipses, and with the pathfinder tools I was able to create a gear shape. Once I had the gear, I added the shapes that would later be laser-cut. I did not want concrete figures for the visuals, which is why most of the shapes are patterns or abstract shapes. For now, I only have four templates, but I will probably design more in the future. Once I had my templates ready, I laser-cut them. However, the first time I laser-cut the gears, I had calculated the measurements wrong: the center of the gears was smaller than the hole in the box. Furthermore, the main reason I wanted the templates to be gear-shaped was so that I could rotate them by pushing the gears’ teeth. But unfortunately, in this first try, I also made the teeth too small, so I could not grab them from the top of the box. Thus, I redesigned the gears and fixed them. The second time I laser-cut them I got the right result.

 

Once I finished laser-cutting, I put the pieces together with hot glue. This is the final result:

 

And here are some of the visuals it creates:

Conclusion

Overall, I ended up changing my design quite a bit because I realized the original design I had thought of was not very effective for the result I wanted. In order to create Visualone, I used Illustrator as the main software for my project, as well as MakerCase, which is an online resource for making boxes. Once I had my files ready, I laser-cut 3mm wood (for the box and the template walls), 5mm wood (for the template holders), and 3mm acrylic (for the templates).

I am pretty happy with the result, but I would probably like to use a stronger light so that the visuals look nicer, and maybe even make the box and the templates a little bit bigger.

RAPS: Cosmic Consciousness – Reflection

Cosmic Consciousness – Reflection

Although all of these artists were interested in provoking a sense of synesthesia through their work, Brakhage’s work is quite different from that of the Whitney brothers and of Jordan Belson, at least in terms of its visual component. In contrast to the Whitney brothers, Brakhage did not use new technologies to create his work. However, Brakhage’s work seems very innovative for the period when it was made, even if it was just through the use of recorded film and paint. The Whitney brothers’ work relied very heavily on the science and technology they used. They built their own machine to create beautiful abstract patterns, which they animated. It is obvious that their work was shocking and inspiring, considering how many special-effects artists used them as a reference for their own work. Jordan Belson was, like the Whitney brothers, very interested in Eastern metaphysics, and in his case, yoga was very influential for his art. But in contrast to the Whitney brothers, he used both old and new technology. He also made very abstract films with many patterns.

With the work that these artists made and the developments that followed, a much bigger emphasis was put on the immersion of the audience. As the author says, the interest in abstract and psychedelic films “flowed into a sea of mass culture” (159), to the point where bands such as Pink Floyd and the Who had these types of films projected while performing. I think that the fact that some of the artists who followed did not have the skills to create such complicated works made them rely on simpler methods, such as working mainly with liquids on plates. However, Vortex did also use technology to create its films. Where the previous artists’ works seemed more intimate in a sense, Vortex seemed to be more out in the public. For me, it is as though the later works moved a little bit away from the spirituality of Belson and the Whitney brothers and more into the psychedelic side of abstract film.