RAPS: BEAP + Vizzie Exercises

Exercise 1

I used the virtual keyboard and connected it directly to the granular oscillator, which is then connected to an audio effect. The effect I used at the end of the signal chain, before sending the audio to Vizzie, is the “returner”. The audio output is also sent separately to Vizzie so that each range of values can affect either the brightness, the contrast, or the saturation of the video. The “returner” and the gate, which is connected to the keyboard’s gate, are attached to the signal, which is then routed to the stereo output. The output of the signal is also connected to the audio splitter, which is what allows different ranges of data to modify parameters in Vizzie. To make the amplitude of the audio output determine the visibility of the video, I used audio2vizzie to capture the data and attached it to the video fader between the video and a black background.
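The band-splitting idea behind the audio splitter can be sketched outside Max as plain code. This is an illustrative Python sketch, not the actual BEAP/Vizzie patch: the band cutoffs and the RMS-to-fader mapping are my own assumptions.

```python
# Illustrative sketch (not Max/MSP code): split a signal's energy into
# low / mid / high bands and map each band to one video parameter,
# roughly what the audio splitter + audio2vizzie modules do in the patch.
import numpy as np

def band_energies(signal, sample_rate,
                  bands=((0, 200), (200, 2000), (2000, 8000))):
    """Return the normalized energy of each frequency band (sums to 1)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    energies = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    total = sum(energies) or 1.0
    return [e / total for e in energies]

def map_to_video_params(signal, sample_rate):
    """Map band energies to brightness / contrast / saturation (0..1)."""
    low, mid, high = band_energies(signal, sample_rate)
    return {"brightness": low, "contrast": mid, "saturation": high,
            # overall amplitude (RMS) drives the fader between video and black
            "fader": float(np.sqrt(np.mean(signal ** 2)))}
```

A 100 Hz test tone, for example, puts almost all of its energy in the low band, so it would drive brightness while barely touching contrast or saturation.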

Exercise 2

I used three separate sequencers, one for each drum track. Each was then connected to an audio effect: the returner, the frequency shifter, and the classical vocoder. I then attached the audio effects to an audio mixer and finally to a stereo output. For the videos, I imported three different files and mixed them together with a video mixer. The audio data determines when each video is displayed, since I used audio2vizzie to control this.

Digital Fabrication Final Project Proposal

Visualone

Visualone allows you to project visuals by just using your phone’s flashlight.

Project Statement

The purpose of my project is pure entertainment and visual stimulation. Recently, in my RAPS class, we have seen many examples of artists working with light and projections of abstract shapes to make short films and animations. This is something I am very interested in, and I would love to make my own little visual maker using only my phone’s flashlight and Visualone.

Visualone does not necessarily solve an everyday-life problem. However, I have wanted to buy a projector for this specific use for quite a long time, and since I still have not bought one, I think this is a great opportunity to make something similar to a projector.

Inspirations

Mary Ellen Bute is the main artist who inspired me for my project. Bute’s main artworks involve visual music: she created an oscilloscope that could be played like an instrument, and this is how she made her visual music. In my project, I will concentrate only on creating the visual aspect, as it is what interests me the most. My project will not be nearly as complicated as Bute’s oscilloscope, but I am very interested in using light and laser-cut templates as my visual machine.

Another of the works that inspired me was a 3D printing system, TrussFab, which creates joints that are then attached to plastic bottles in order to build large-scale structures. Although I will not be creating large structures, this was inspiring to me because it shows that by creating very tiny connections, you can actually connect larger objects together. Thus, I want to create joints that are easy to attach to the templates, so that numerous templates can be stacked on top of each other while still being able to rotate manually.

My own iteration of this project will be an improvement mainly because I am making it so that I can use it with my iPhone’s flashlight. Thus it will be relatively easy to use and to produce. Furthermore, if I get ‘bored’ of the templates I have, I can design new ones and use them.

Project Design and Production

In order to create Visualone I will use Illustrator and Rhinoceros as the main software for my project, and laser cutting and 3D printing as the methods to realize what I design. Digital fabrication is crucial for this project because I need its accuracy and precision both to design the templates and to create the joints for them.

Here is a sketch of the very basic structure of Visualone. It will probably be a box into which the user places their phone, facing downwards so that the flashlight points directly at the templates.

 

Here you can see some of the designs I have created as an example for what the templates of Visualone will end up looking like:

 

 

IMD: Ron Fedkiw

All the works shown in Ron Fedkiw’s research are incredibly well done. Although you can tell the animations are not ‘real’, in the sense that you can tell they are not video footage, the movement of the particles is very well simulated and feels natural, almost perfectly represented.

All the works amazed me, but the animations that surprised me the most are: Energy Conservation for the Simulation of Deformable Bodies, Fully Automatic Generation of Anatomical Face Simulation Models, Codimensional Surface Tension Flow on Simplicial Complexes, Simulating Free Surface Flow with Very Large Time Steps, and the higher-resolution facial model that is currently being built.

RAPS: Early Abstract Film – Reflection


Norman McLaren – Loops (1940)

As the NFB states, “a central McLaren belief was that in film ‘how it moved was more important than what moved.’” I really enjoy Norman McLaren’s work because he is able to give character to shapes that are very abstract. He does this by analyzing human movements and applying them to his little weird creatures. Hence, in my opinion, McLaren creates a dialogue between the figures, so that it seems as though there is a story behind the abstract shapes and how they move, even when there isn’t one. In many of his pieces, he also morphs almost every figure into the next, as he does in Loops, which I find very pleasing to watch.

As BBC Radio states, McLaren created “a form of ‘visual’ or synthetic sound made by hand-drawings on the sound-track of the film seen in … Loops.” Thus, the audio and the visuals of this animation are synched together. McLaren had synesthesia, and I think this characteristic shows in many of his works, which makes them very interesting and engaging to watch.

 

 

 

DSFA – Assignment 1: Pop-Up Me

POP-UP ME

Mugshot

I took mugshot photos of myself from the front and from the side. In Photoshop, I aligned the images, using guides to make sure that the eyes, ears, mouth, and essentially the most important features of my face were aligned as well as possible. Once my pictures were ready, I imported them into Maya and used camera objects to project them onto the center of the scene, forming a cross, so that I could use those images as a guide and start 3D modeling my face.

 

Nose & Nostrils

The first thing I started modeling was my nose and nostrils. The first model of my nose was divided into 7 different pieces for the front part and 4 pieces on the sides. Here is a drawing of this:

                    

I went through many different stages modeling my nose, so here are some pictures of the process and the final version of it:

  

 

Mouth

For the mouth, I had to use nine pieces around the lips in order to create a circular shape. I started on the outer part of the lips and extruded inwards, forming rings, until I finally had the shape of my lips. But I made sure not to close the mouth completely so that I could animate it later.

 

 

Ear

To model my ear, I focused on creating a question-mark shape divided into 10 pieces. However, my ear is not exactly shaped like a question mark; it is more like a half ellipse, so that is the main shape I ended up modeling.

           

I first created the main outer part and then continued to close the ear and join it to the skull of my model.

 

 

Eye

The structure of the eye is similar to that of the mouth, but instead of being divided into nine pieces, it is divided into ten. Thus, I first created an ellipse of squares around the eye and continued by working inwards. I also tried to create my double eyelid.

 

Skull

For the skull, I started by creating a column from the top part of the nose to the top of the head and on to the back of it, and similarly from the side of the eye to the ear, and from there again to the top and back of the head. Each quarter was divided into five parts.

 

Neck & Torso

Mandala

For my Mandala, I mainly used spheres and cones. I placed them in the form of rings and made them rotate differently around the x, y, and z axes. I also made one of the rings translate along the y-axis while changing its radius.
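The ring layout can be described parametrically: objects placed at equal angles on a circle, with the circle's rotation, radius, and height driven by the frame number. Below is a minimal Python sketch of that idea; the object counts, radii, and animation rates are made-up values, not the ones used in my Maya scene.

```python
import math

def ring_positions(count, radius, y=0.0, phase=0.0):
    """Place `count` objects evenly on a circle of `radius` in the XZ plane."""
    positions = []
    for i in range(count):
        angle = phase + 2 * math.pi * i / count
        positions.append((radius * math.cos(angle), y, radius * math.sin(angle)))
    return positions

def animate_ring(frame, count=12, base_radius=5.0):
    """Rotate the ring and pulse its radius while it rises along the y-axis."""
    phase = frame * 0.05                                      # slow rotation
    radius = base_radius * (1 + 0.3 * math.sin(frame * 0.1))  # pulsing radius
    y = 0.02 * frame                                          # rise over time
    return ring_positions(count, radius, y=y, phase=phase)
```

Each sphere or cone in a ring just gets one of these positions per frame; stacking several rings with different phases and axes gives the mandala effect.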

Here is the video of my mandala in motion:

 

IMD: Assignment 2

Infomotions Piece

For the infomotions piece, I used the things that describe me best in terms of my work and my background. Thus, I decided to share details such as my name, my height, where I live, the languages I speak, the year I was born, and some skills and things that I like, including generative art, photography and creative coding.

Instead of using a typeface, I decided to use my handwriting: since the infomotions piece is about me, I thought my handwriting would represent me best. I chose to show my more personal details in the shots where I appear, and my skills on a black background, alternating between footage of me and the black background.

For the handwritten words, I used a tablet and wrote them in Illustrator, then imported them into After Effects to add them to the sequence. I also made my logo in Illustrator, but I am not completely happy with the result because I was not able to get the exact spiral shape I intended to create, so I will try to improve it with a different method.

Here is my infomotions piece:

Kinetic Typography

For the kinetic typography piece I decided to use the lyrics of a song which I really enjoy: “The Moment” by Tame Impala. This song is very special to me, so I really liked creating a kinetic typography piece for it. I used After Effects to animate the lyrics.

I used several different fonts for this piece. Tame Impala is a psychedelic rock band based in Australia that started playing in 2007. Although it is not a very old band, I researched some of the fonts used by psychedelic rock bands in the USA during the 1960s, and based on my research I selected the ones I thought would fit Tame Impala best. Among these were Prisma, Filmsense, Davida and Amelia. I then added two other, simpler fonts: Nuri and Times New Arial.

 

Here is my 10-second kinetic typography piece:

Open-Option Piece

My open-option piece for this assignment is a continuation of my 10-second kinetic typography piece. For this sequence, I changed a few things to make it more exciting and used a longer part of the song.

IMD – Assignment 1

Interactive Motion Design: Assignment 1

What is interactive motion design?

When I think of motion design, the first characteristic that comes to mind is that it is visually stimulating. Nowadays, it is very hard to make something that stands out from the rest of the media around us, because everything has become visually stimulating to some extent, but I strongly think motion design should at least be interesting to watch. In addition, motion design clearly needs an ordered sequence of frames or pictures so that the figures on screen appear animated and in action. Interactive motion design adds a cause-and-effect relation between the user and the work: when the user performs an action, the work reacts and triggers a function. Adding motion design to user interaction makes a piece more engaging and easier to use. And lastly, I think motion design should also provoke an emotional reaction, though this is not strictly required.

 

Dynamic Grid

For the dynamic grid exercise, I wanted to experiment with 2.5D motion, so I decided to make a bird-flying sequence by animating images. To do this, I first selected some images I had previously taken and adjusted them in Photoshop. I then used After Effects to translate the positions of the bird images so that they look like they are flying, their flight paths forming a grid. I also created a camera object and placed it in 3D space to make the scene look more realistic. Here is the result:

 

 

I was not very satisfied with this, so I tried to think of other ways to represent a grid system, and I thought of perspective. I then remembered we had learned about false perspective in class and thought it would be a good idea to add it in. In this next piece, I placed the background in 3D space as if the center of perspective were in the middle of the working board, and I used only one image of a bird, animating its position to make it seem like it is flying at different depths.

 

 

I still wanted to create a sense of false perspective while creating grids. Thus, instead of using three images to create the dimensional background, I used just one of them, while still having the bird fly in the same position.

 

 

But I was still not very satisfied with the result, so I decided to use Processing instead. In this piece, I used the dynamic grid system as a way to play with the visibility of shapes.

 

False Perspective

The first time I did this exercise, although I followed every step we learnt in class, my motion tracking was very shaky, so I redid it and fixed the issue. All I had to do was make the motion-tracking box bigger so that it would track a greater number of pixels and produce more accurate tracking. Below you can see the original piece and the fixed one:

Open-Option Piece

For the open-option piece I decided to use Processing, because I was having some issues with After Effects and simply did not know how to use it for what I wanted to make. This piece is a set of lines forming a circle, still at one end and in motion at the other.
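The geometry behind this piece is simple to state: each line has a fixed endpoint on an inner circle and a moving endpoint whose radius oscillates over time. Here is a minimal sketch of that endpoint math. The actual piece was written in Processing; the line count, radii, and wobble amount here are arbitrary illustration values.

```python
import math

def circle_lines(t, n=60, r_inner=50.0, r_outer=150.0, wobble=30.0):
    """Endpoints for n lines: inner ends fixed on a circle, outer ends moving."""
    lines = []
    for i in range(n):
        angle = 2 * math.pi * i / n
        # fixed end, on the inner circle
        x0, y0 = r_inner * math.cos(angle), r_inner * math.sin(angle)
        # moving end: the outer radius oscillates over time t,
        # with a per-line phase offset so the motion ripples around the circle
        r = r_outer + wobble * math.sin(t + i * 0.3)
        x1, y1 = r * math.cos(angle), r * math.sin(angle)
        lines.append(((x0, y0), (x1, y1)))
    return lines
```

In Processing, the equivalent draw loop would call this once per frame with an increasing `t` and draw a `line()` for each endpoint pair.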

 

DigiFab – Assignment 2: 3D Modelling Eyeglasses

3D Modelling Eyeglasses

Description: For this assignment I decided to create a pair of eyeglasses for two reasons: first, I wear glasses, and second, I really appreciate my vision, and eyeglasses represent the way I see the world. All the modelling was done in Rhino.

Process:

In order to make a 3D model of the eyeglasses, I started by creating two rectangles to make the temple and the earpiece. I then joined the shapes and continued by filleting the curves to give them a more organic look.

I then continued by extruding the surface, in order to give it volume.

Then I used ellipses to create the glass, and also extruded and duplicated the shape to give it volume.

Next, I used an arc to make the bridge.

 

And lastly I made another set of ellipses to make the frame of the eyeglasses. Here is the final result of my model:

DigiFab – Assignment 3: 3D Printing a Seashell

3D Printing a Seashell

 

Description: For my first 3D model I decided to create a seashell, because I really wanted to 3D print something with a spiral shape, and I thought a seashell was the perfect example of a spiral-shaped object. The 3D modelling was done in Rhino.

3D Modelling:

Rhino has a spiral curve tool, which I used to create the main shape. Then I created two ellipses at both ends of the spiral curve and used the “Sweep Rail 1” tool to give it volume. However, the shape that the spiral tool creates is actually quite different from that of a seashell, as you can see below.

 

Since the shape was not accurate enough, I had to edit the points of the spiral curve. This took a very long time, because you can’t modify the points of the curve once you turn it into a volume, and, of course, it is hard to tell whether the volume will be shaped correctly just by looking at the curves. So I had to go back and forth until I got the right shape. I also ended up adding many more ellipses to my figure, because I realized the volume at the tail end of the seashell was not thick enough, leaving a lot of space between each ring. Also, as I added more ellipses, it became more complicated to keep the spiral shape smooth, as you can see below:
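The curve I was chasing is essentially a conical spiral: the radius tapers with each turn while the curve descends, which is why the default spiral tool (constant or linearly changing radius) needed so much point editing. A minimal sketch of that math; the taper, pitch, and point counts are illustrative values, not the ones from my Rhino model.

```python
import math

def seashell_spiral(turns=4, points_per_turn=40,
                    top_radius=10.0, taper=0.6, pitch=3.0):
    """Points on a conical spiral: the radius shrinks by factor `taper`
    each turn while the curve descends by `pitch` per turn, like the
    central axis of a seashell."""
    pts = []
    for i in range(turns * points_per_turn + 1):
        t = i / points_per_turn          # number of turns completed
        r = top_radius * (taper ** t)    # exponentially tapering radius
        angle = 2 * math.pi * t
        pts.append((r * math.cos(angle), r * math.sin(angle), -pitch * t))
    return pts
```

Sweeping a shrinking ellipse along a curve like this is what gives the shell its rings; the exponential taper is what the evenly spaced default spiral lacked.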

 

 

Here you can see the final organization of the curves and the final volume shape of the seashell.

 

 

   

 

3D Printing: 

Once I had my 3D model, I continued by 3D printing it. I used the Cura software, which generated supports for the parts of the seashell that needed them. Finally, I used the Mini printer to 3D print it. This process worked fine.

 

Notes: I really wish there were a way to adjust the dimensions of the shape after having swept the rail, because I had a really hard time making the top end of the shape; I could not get the positioning right. This would have made it much easier and faster to 3D model.

 

RAPS: Audio-Responsive Vizzie


For the Audio-Responsive Vizzie I mainly used Vizzie’s FRACTALIZR effect to change the parameters of my videos. With the audio splitter, I used the low values to change the rows of the fractalizer, the medium values to alter the tint of the video, and the high values to determine the fractalizer’s mode. Lastly, I used the Audio2Vizzie values to set the number of columns into which the fractalizer splits the video.

 

Here is the link to my patch:

https://gist.github.com/MarinaPascual/e14c54be56da48a87aa53a3480351be3