Makeup homework documentation

For this homework, I made a project commemorating the Hungarian Revolution of 1848. The project displays a famous painting of the revolution. A red target follows the mouse, and when the user clicks, the picture blows apart into particles and turns red. With this, I wanted the user to remember how violently the revolution was crushed.

I also made some glitch art with Audacity and Photoshop, reflecting on current political tensions in Hungary:

Chin-T Generator: Generative Systems Final Project

Project Presentation: Generative-Sytems-Final-Presentation_PDF

Description:

This project aims to be a text, shape, and placement generation tool using both Chinese and English. The goal is to surprise users and help them visualize clothing prints.
As a complementary resource, a vinyl cutter will be used to create a physical output that the user can keep as a reminder of the experience.

Inspiration
This project was inspired by an observation about the quality and content of the prints on people’s clothing, especially in Shanghai. There is an evident increase in mixed-language prints that use both Chinese and English to communicate. It is interesting to see how the two languages, slang, and words can complement each other.

Development
The process began by consolidating the idea and the content, answering questions such as: what do I want to communicate, and why in this way?
After evaluating some of the topics covered in class, such as generative text and generative images, and being inspired by some of the artists shown in class, I had a better idea of what I wanted to do.

I started by setting up just one example, defining and redesigning the graphics. Then I experimented with some of the code we were offered in class and mixed and shifted it into my idea.

After I had more or less set up the text the way I wanted it to flow, look, and be placed, I moved on to incorporating the Chinese characters into the code.
I used PNG images that I created for each character I needed.

I spent a good amount of time figuring out the role the vinyl cutter would have within my project. I learned about the software needed and the materials that could potentially be used. I came to understand the importance of the quality of the image exported from Processing, and figured out that thickness and definition are key for a decent result.

After figuring out the basic code and the output, I moved on to developing a four-item collection that could better put my main idea into context.
I had the chance to play around with the images and the role of each item in the collection.

This produced the following final result:

What went wrong:
There are many things I think could be improved. Coding turned out to be a long and challenging process. I would like to add a random factor to the core code.
The manifesto should have been more focused on the context of the project and the class.

I think that if I continue to work on this project, it has the potential to become something relevant at the intersection of culture, generative systems, fashion, and design.

Self Portrait – Cheryl – Documentation

Link to Presentation: https://docs.google.com/presentation/d/1miAjaWnoRTgejw2ThOFHyr1rMUAYkH3iFYgvztDMVNs/edit#slide=id.g353217061b_0_61

Project Description:

Self Portrait is an interactive installation that explores what we are looking at when we look at ourselves, and further, how the way we look at ourselves changes the way we look. This can be taken literally: you notice what you like and don’t like when you look at yourself, and that changes how you look in the long term. It can also be taken more conceptually: what you see in yourself is largely shaped by how you feel about yourself, not just your appearance but also your personality, abilities, and more.

The way this project tries to answer these questions is by visualizing the trace of the user’s gaze. When visualizing the trace, I use techniques from image manipulation and generative art to represent the complexity and mystery of what people see in themselves and how it affects them.

Project Inspiration:

This project was inspired by Inessah Selditz’s Eye Portrait, in which two users’ self portraits are manipulated by how and where they are looking at each other. She was exploring the space and communication between people; my alteration is more about exploring oneself.

 

Initial exploration:

I started by having effects follow the mouse, and later applied eye tracking to them. It was simple and self-explanatory. For the eye tracking, I used a device called Tobii, and a Processing library by Augusto Esteves that helps me map one’s gaze onto the screen.

Later Development:

As I was working on this project, I felt it was not generative enough. As I stated in my manifesto, there are three irreplaceable elements in a generative art piece – the artist creating the project, elements that are beyond human prediction, and audience experience and interpretation. I felt I was lacking in elements beyond human prediction, so I started to play around with more random, generated patterns. I also went back to my original idea of putting different effects in different grids and having the grids affect each other. Here are screenshots from the process of developing the final sketch:

(it might be the photo or the effect, but I actually like this better than the final product)

These are screenshots of the final effects:

 

I think the final product is visually too busy and not planned out very well. Effect-wise, I really like how the grid in the upper-left corner turned out.

Here’s a video of using eye tracking to manipulate the project:

What I like about the project:

The interactions are pretty self-explanatory. Users normally don’t need any instructions to navigate the project. The red button on the capture scene draws the user’s attention to press it, and what happens after that depends on where they are looking.

What I don’t like about the project:

I’m actually not very happy with the final result of this project. I think there are two areas in which I could improve:

  1. Visual – the visuals are really busy. To an extent, a busy or unpleasant visual can be justified by the purpose of the project, but what I have right now just feels unorganized.
  2. Creativity in image manipulation – I had a hard time coming up with what to do with each grid, which is strange because there is so much you can do with generative art and image manipulation. I didn’t give myself enough time to think about what kinds of effects I wanted and what I was trying to say with them, and rushed too fast into making it and making it work.

Final Project: Nombre Redondo

NOMBRE REDONDO

By: Marina Victoria and Nathalia Lin

Inspiration

Nathalia’s passion for mandalas and my passion for patterns led us to this piece. Our purpose with this project is to show people’s unique and diverse identities. We wanted to make the piece interactive, and realized that an individual’s name is one of the first things we identify them with. This is why we decided to create mandalas from people’s names. However, although two people may have the same name, this obviously does not mean they share the same identity. We express this through a certain degree of randomness in our work, while also allowing the user to decide whether or not to change certain styles of the mandalas. So even if the same name is typed several times, the final design will never be repeated.

 

Description

This is an interactive piece where the user has to type their name (or any word) and a mandala is created based on the keys that are pressed. All the mandalas are represented in different shades of grey and white, over a black background. The fill and stroke of the patterns can be changed by clicking the mouse in different parts of the screen. In addition to being able to change the color, each time the mouse is clicked, the pattern of whichever key was pressed last will rotate by a random angle on top of the already drawn pattern.  Furthermore, the radius of the patterns created for each letter increases every time a key is pressed.

 

Process

Firstly, we had to figure out what sort of aesthetics we wanted to have for the mandalas. So Nathalia sketched some possibilities for the shapes by hand. We thought it wouldn’t be too hard to make these figures. However, when we actually started coding, we decided to explore other shapes and not only focus on these, since some of them were harder to code than we had expected. Here is the sketch she made:

We decided to have each figure appear in the center of the sketch. Then we figured it would be good to have the radius of the circles increase gradually. To achieve this enlargement, we simply set a global variable for the radius of the patterns and made it increase while a key is pressed. We then created a class for the patterns and set some of the main variables we would need in its constructor, including variables for the position and the radii of the shapes. After that, we created 26 functions inside the class, one for each letter of the alphabet, and called them in the draw function with many if statements, so that when a certain character key is pressed, its assigned pattern is drawn in the sketch.
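
All 26 letter patterns rely on the same placement math: copies of a shape spaced at equal angles around the center at some radius. Below is a standalone sketch of just that math, in plain Java so it compiles outside Processing; the class and method names are mine, not from our actual sketch (where Processing’s cos() and sin() do the same job).

```java
// Hypothetical helper: positions for n copies of a "petal" placed at
// distance r around a center (cx, cy) -- the core of a mandala pattern.
public class MandalaMath {

    public static double[][] petalPositions(double cx, double cy, double r, int n) {
        double[][] pts = new double[n][2];
        for (int i = 0; i < n; i++) {
            double angle = 2 * Math.PI * i / n;  // equal angular spacing
            pts[i][0] = cx + r * Math.cos(angle);
            pts[i][1] = cy + r * Math.sin(angle);
        }
        return pts;
    }

    public static void main(String[] args) {
        // 4 petals of radius 100 around the origin: one every 90 degrees.
        double[][] pts = petalPositions(0, 0, 100, 4);
        for (double[] p : pts) {
            System.out.println(Math.round(p[0]) + "," + Math.round(p[1]));
        }
    }
}
```

In the sketch itself, each letter function runs this kind of loop with its own shape, radius, and count, drawn relative to the center via translate().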

Finally, after creating a basic template of how our code would work, we started designing the actual patterns. Nathalia and I divided this task in half: I designed letters “A” to “M” and she did “N” to “Z”. After many variations of for loops, sine and cosine functions, shapes, translations, and rotations, we finally created the 26 different patterns. At first we gave all of them a white fill or stroke, but then we decided to vary this within the grey scale to give our mandalas a little bit of depth. We wanted this to be random. However, we thought it would be nice to let the user interact with our project a little bit more, so we added a function that maps the color to the mouse position within the sketch: when the mouse is clicked, the color value corresponds to wherever the mouse was clicked. In addition, we added a function that makes whichever shape was drawn last rotate and be drawn once again. And lastly, we created a function that resets the radius variable and redraws the background in order to clear the canvas.
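
The click-to-recolor step is essentially one map() call: the mouse position is scaled into the 0–255 grey range. A minimal standalone version of that mapping (plain Java; map() is re-implemented here only so the example compiles on its own, and the function names are illustrative):

```java
public class GreyMap {

    // Re-implementation of Processing's map() so this runs outside Processing.
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    // Grey value for a click at mouseX on a sketch 'width' pixels wide:
    // left edge -> black (0), right edge -> white (255).
    public static int greyForClick(float mouseX, float width) {
        return Math.round(map(mouseX, 0, width, 0, 255));
    }

    public static void main(String[] args) {
        System.out.println(greyForClick(300, 600)); // a click in the middle
    }
}
```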

After hearing people’s suggestions for further improvements, we decided to create a “home page” which gave instructions on how to interact with our piece because we noticed that the possible interactions were not clear enough for the viewers. So we created the following home page:

Furthermore, some people asked if they could save the sketch as an image, so we also made this possible.
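
Saving needs almost no code: Processing’s built-in save() writes the current canvas into the sketch folder (saveFrame() writes numbered files instead). A fragment of the idea; since letter keys already draw patterns in our piece, a non-letter trigger like ENTER is assumed here:

```
// Processing fragment (runs inside a sketch, not standalone):
// press ENTER to save the current mandala as an image.
void keyPressed() {
  if (key == ENTER) {
    save("mandala.png");              // one file, overwritten each time
    // saveFrame("mandala-####.png"); // or keep every saved version
  }
}
```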

 

Visual Support

 

Chaos Scrabble – Final project documentation

Follow-up on my artist manifesto.

I made some updates to the project I had presented last class.

These updates include a bigger text size, different colors (white/black) for the matching letters, a vocabulary related to the song, and bug fixes.

I needed the bigger text size so that the lines are more easily readable. There was a small challenge here: if the text size was too big, the final shape of the pattern on the screen wouldn’t be very pronounced, and the text would just take up the whole screen. The shift from empty space, to a few readable lines, to the chaos in the middle was a crucial part of my project’s look, and I didn’t want to give it up. I finally arrived at the ideal text size of 24 pixels.

The matching letters are now in different colors too: while the rest of the text is red, green, or blue, the matching letters are either white or black, depending on the version of my product. More on that later.

I edited the vocabularies so that the adjectives, nouns and verbs are based on the lyrics of the song playing. Mostly they are words from the lyrics, but I included some other words by association.

I also fixed a bug that made new lines pick an existing line to be based on regardless of whether they were vertical or horizontal. This made highlighted letters appear where they shouldn’t have been, and made some lines pop up seemingly unrelated to any existing lines. The other bug I fixed was that the string would sometimes contain ‘null’ values. It doesn’t anymore.

Things That I Wanted To Work But Didn’t And Why It’s Totally Not My Fault.

Ideally I wanted to make the project in a way that the user could switch between songs, and could change the background color between black and white. I also wanted to display a variety of shapes next to the lines that are programmed to never intersect with the lines. These didn’t work because:

1) Memory issues – switching back and forth between two songs makes the visualization stop without an error message, and when I try to add the shapes to their ArrayList, the sketch outright asks me to increase the memory allowance in the Preferences. I increased the living crap out of it, but the sketch couldn’t run even when I let it use a whole gigabyte of my computer’s memory.

2) Switching between the backgrounds wasn’t possible, because I only draw the background once at the beginning. If I drew it again, all the text would be lost. ¯\_(ツ)_/¯
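
In hindsight, one workaround (not implemented, just a sketch of the idea) would be to accumulate the text on an off-screen PGraphics layer with a transparent background, and composite it over a freshly drawn background every frame; then the background color could change without losing the text:

```
// Processing sketch of the layer idea (names are illustrative):
PGraphics textLayer;           // accumulates all drawn text, transparent bg
boolean darkBackground = true;

void setup() {
  size(800, 600);
  textLayer = createGraphics(width, height);
}

void draw() {
  background(darkBackground ? 0 : 255);  // now safe to redraw every frame
  textLayer.beginDraw();
  // ...draw any new text onto textLayer instead of the main canvas...
  textLayer.endDraw();
  image(textLayer, 0, 0);      // composite the accumulated text on top
}

void keyPressed() {
  darkBackground = !darkBackground;     // toggle without erasing the text
}
```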

What I did instead.

I uploaded three versions of the sketch:

  • the first one can change between songs, and it can also crash real hard.
  • the second one is with one song, and with black background. The matching letters are highlighted in white.
  • the third one is with the other song, and with white background. The matching letters are highlighted in black.

Final Project Documentation, Grace Gao

Come home to me

Date: March 15th

A short video demo:

 

Technical Support: Processing, FaceOSC, Logitech Webcam C920

Concept:

The homeland where we grew up is THE place where we had our first encounter with nature and first felt the love of other people, and it is therefore the source of our power and strength. We grow up, become stronger, and eventually most of us leave our homeland and start off somewhere new.

Generation after generation, people have absorbed its nourishment, left, and never come back.

But our homelands need our attention and love, too. They are gradually becoming devitalized, and eventually they will decay, never again able to nurture new generations as before, unless the people who grew up in them and left can find a way to come home and give their love and energy back to the land.

Come home to me intends to give people an aesthetic and intuitive interactive experience of how their presence, attention, and love could change their homeland significantly.

Inspiration:

During the Chinese New Year break, I went back to my grandparents’ home in a small town in central China named Jincheng. Having gotten used to living in a metropolis like Shanghai, I hardly knew what Jincheng was like nowadays before I stepped out of the train station. As soon as I stood on the land of Jincheng and looked around, I was shocked by its undeveloped look. It had lost the pleasant natural environment it once had, the sky was a depressing grey, the air was terrible, and yet I could hardly say these sacrifices had succeeded in bringing people modern facilities and a modernized city.

I felt bad about the poor condition of Jincheng, because it used to be such a beautiful town, one I would miss so much whenever I was away from it. People’s thoughtless decisions and lack of attention have led to its present situation. Old small towns need people’s care and love to come to life again.

Process:

I. Concept Development

Human elements are crucial. Face detection is not the most technically efficient way to carry out the interaction, but it highlights the significance of “you” being there for your land. A Kinect or a distance sensor would be easier to set up on site for the presentation, but then anything could trigger the change, and the concept and message I want to convey would be lost.

The size of the participant’s face detected by the camera is proportional to their distance from “home”. When they are far away, the image of home is distorted and in greyscale. As the participant comes nearer, home starts to regain its color, and by that point the participant is already close enough to the screen to receive the warm visual reward of their home being lit up by their presence and attention.
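
The mapping from detected face size to color is a normalize-and-clamp followed by a per-channel blend. A standalone sketch of that arithmetic in plain Java (the threshold numbers are made up and would need tuning per camera; inside Processing, constrain() and lerpColor() do the same work):

```java
public class HomeColor {

    static float constrain(float v, float lo, float hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    // Presence: 0 when the detected face is small (participant far away),
    // 1 when it fills the frame. farW/nearW are hypothetical tuning values.
    public static float presence(float faceW, float farW, float nearW) {
        return constrain((faceW - farW) / (nearW - farW), 0, 1);
    }

    // Blend one color channel from its greyscale value back toward its
    // true value as the participant approaches.
    public static int restoreChannel(int grey, int trueValue, float presence) {
        return Math.round(grey + (trueValue - grey) * presence);
    }

    public static void main(String[] args) {
        float p = presence(200, 80, 320);  // face halfway between far and near
        System.out.println(p);
        System.out.println(restoreChannel(100, 200, p));
    }
}
```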

II. Optimization

I started early, and after I had a prototype done, I wanted to optimize it to a better level.

Before optimization, here is how the project looked before and after the participant comes near:

It worked fine. The big problem was that Processing ran extremely slowly, because I was making it draw 4,000 ellipses on top of the image.

So I decided to manipulate the photo’s pixels rather than drawing so many ellipses. It was very different from the previous approach, so it took me a few days to get it done. After optimization, the sketch ran much faster, and Come home to me version 2.0 looks like this:
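
The speedup comes from touching each pixel once in a flat array instead of issuing thousands of ellipse() calls. A standalone sketch of the kind of per-pixel pass involved, in plain Java (in Processing the same loop runs over img.pixels, followed by updatePixels(); the greyscale formula here is the simple average, not necessarily the one I used):

```java
public class GreyPixels {

    // One pass over an ARGB pixel array, converting every pixel to grey.
    public static int[] toGrey(int[] pixels) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            int c = pixels[i];
            int r = (c >> 16) & 0xFF;
            int g = (c >> 8) & 0xFF;
            int b = c & 0xFF;
            int grey = (r + g + b) / 3;          // simple average
            out[i] = (0xFF << 24) | (grey << 16) | (grey << 8) | grey;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] grey = toGrey(new int[]{ 0xFF6090C0 });  // r=96 g=144 b=192
        System.out.printf("%08X%n", grey[0]);
    }
}
```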

Since it was now faster, I decided to add more interactive and generative elements to the project. I drew some flowers and planned to incorporate them into the sketch when the participant gets close enough and starts to say hi (I was also going to use the Minim library to trigger a sound telling people to try saying hi):

 

 

III. User Testing and Adjustment

Unexpected things always come up, but it’s fun to cope with them. In Tuesday’s in-class user testing session, I got surprisingly unanimous feedback that the previous version felt much better than the updated, faster one. Reflecting on this, I realized that along the way I had been thinking so hard about how to make it run smoother and faster that I had somehow forgotten about the actual visual effect and the message it carries, which are in fact the ultimate goal of my work. Version 1.0 does run a bit slowly, but the distorted feeling is well expressed, as is the flourishing scene.

I decided to change back to the old version. Since the sketch is already slow, putting the flowers in would add a new particle system and make it even slower. Also, the vibe of the drawn flowers doesn’t fit the sketch well. I decided not to use the flowers.

IV. Preparation Prior to Presentation

The night before presentation day, I set up everything I would need for a real show. It was my first time setting up a physical installation for a project, so it was very exciting. It was also harder than I thought it would be. Some parameters that worked on my computer needed to be changed, so I slightly adjusted parts of my code before documenting.

Reflection:

Concept is essential. It is the artist’s job to enrich, expand and elaborate it.

Hold on to the most important thing and don’t drift away and get lost along the way.

Learn and explore, and enjoy the process, whether or not all the effort you put in shows directly in the result.

Always try to learn more: from books, videos, the people around you, you name it.

Special thanks:

  • IGS course instructor Cici Liu
  • Jiwon Shin
  • JH Moon
  • Romola Zhang
  • Quinn He
  • Amy Mao

 

Final Project: the visible invisible

Here is an introductory video of my final project:

I was inspired by the artwork Clinamen by Celeste Boursier-Mougenot and wanted to create a new means for viewers to experience their selves. Initially I wanted the user to appear as one “plate” among a group of plates and get a sense of self through the clinking sound made when colliding with the others. However, I later realized this gives the user too little agency, and they might lose their uniqueness along the way. After much more thought on possible ways to discover people’s ecological selves, I went back to the code we learned in week 6 about text rain and managed to generate waves of falling “balls” that stay on viewers’ bodies and thereby reveal their existence in front of the dark canvas. I named it “the visible invisible” because I think that when people feel invisible and low in self-esteem, they might feel better if they think about their interactions with their surroundings. No man is an island.

One problem I didn’t expect to encounter was the color choices. Initially I set the background white and the falling ellipses blue, because I’ve always found it beautiful and romantic when snow falls on someone’s shoulders on a crisp winter day. However, after user testing, I realized this color choice reduced the immersion of my project; it didn’t fit the volatility of the ellipses or the chilling background music I chose to match the main theme. After a few trials, I settled on a set of colors, pink, light turquoise blue/green, and dark turquoise blue/green, so the display wouldn’t be so eerie. Now that it’s more casual and vibrant, I’ve noticed that users are more willing to free themselves, move around, and play with my project.

I think this coincides with what I stated in my manifesto: “We generate the simplest things. The complexity lies in our choices. The complexity lies in interpretations. The complexity lies in chance.” Technically, I generate literally the simplest things, waves of ellipses, but every user gets a different experience depending on their performance and interpretation.

Another difficulty I had to deal with was brightness detection. Because the lighting in the academic building isn’t ideal, and I wasn’t able to borrow a big enough screen, at first the project didn’t work stably with Cici’s method of detecting brightness/darkness. To work around that, I tried offering users black coats so they could be seen by the computer vision. Later I changed it to color detection, so now as long as the users stand out against a whitish wall, which could mean dark hair or even brightly colored clothing, they can be detected by the computer.

I would like to continue working on the sound. So far I only have a single piece of background music, but in the future I hope it can be manipulated by the users’ movement. I’ve thought about making this project into an instrument, so that every time the user moves to a new location, a new tone is generated. However, I also felt that deviated from my initial idea of giving users a chance to experience the “self”. I am still working on possible ways to bring out one’s existence through music.

import processing.video.*;
Capture webcam;

import processing.sound.*;
SoundFile file;

ArrayList<Ball> balls = new ArrayList<Ball>();  // the falling "rain"

int timeIntervalFlag = 4000;  // ms between waves of falling balls
int lastTimeCheck;

int bMax = 300;               // cap on how many balls exist at once

boolean fall = false;

void setup() {
  fullScreen(P2D);
  noCursor();

  webcam = new Capture(this, 640, 480);
  webcam.start();  
  lastTimeCheck = millis(); 

  file = new SoundFile(this, "sound1.mp3");
  file.loop();
}

void draw() {
  if (webcam.available() == true) {
    webcam.read();
  }

  scale(2.25, 1.875);  // scale up drawing coordinates (webcam frames are 640x480)

  background(0);


  // iterate backwards so removing a ball doesn't skip the one after it
  for (int i = balls.size() - 1; i >= 0; i--) {
    Ball b = (Ball) balls.get(i);
    b.display();
    b.move();
    if (b.pos.y <= 0 || b.pos.y >= height) {
      balls.remove(i);  // off-screen: discard
    }
  }
  if (millis() > lastTimeCheck + timeIntervalFlag) {
    fall = true;
  }

  if (fall == true) {
    // spawn a wave of balls across the top of the screen, 10 px apart
    for (int j = 10; j <= width; j = j+10) {
      if (balls.size() < bMax) {
        balls.add(new Ball(j, 0));
      }
    }

    lastTimeCheck = millis();
    fall = false;
  }
}

class Ball {
  PVector pos;
  PVector vel;
  float rad;

  Ball(int x0, int y0) {
    pos = new PVector(x0, y0);
    vel = new PVector(0, 1);
    rad = 10;
  }

  // earlier version: brightness-based detection (kept for reference)
  //boolean detectCollision(PImage cam) {
  //  color slice = cam.get((int)(pos.x), (int)(pos.y));
  //  return(brightness(slice)<=128);
  //}

  // color-based detection: a pixel darker or more colorful than the
  // whitish wall (r+g+b <= 400) counts as part of a body
  boolean detectCollision(PImage cam) {
    color slice = cam.get((int)(pos.x), (int)(pos.y));

    float r = red(slice);
    float g = green(slice);
    float b = blue(slice);

    return ((r+g+b) <= 400);
  }

  void display() {
    noStroke();
    if (pos.y<height/7) {
      fill(230, 100, 150);
    } else if (pos.y<height/4 && pos.y>height/7) {
      fill(50, 170, 170);
    } else if ( pos.y>height/4) {
      fill(30, 70, 70);
    }

    ellipse(pos.x, pos.y, rad, rad);
  }

  void move() {
    if (detectCollision(webcam)) {
      // on a body: back up two steps so the ball rests on the silhouette
      pos.sub(vel);
      pos.sub(vel);
    }
    vel.y += 0.1;  // gravity
    pos.add(vel);
  }
}

Final Project “Empower Me” by Yuhan

This post documents the whole process of how I made my “Empower Me” postcard generator. Empower Me is designed to let non-artist users create an artistic, personalized postcard with one click. A more detailed introduction to the thinking, rationale, and motivation behind it can be found here:

Proposal: http://ima.nyu.sh/documentation/2018/02/20/final-project-proposal-36/
Manifesto: http://ima.nyu.sh/documentation/2018/03/12/manifesto-writing-by-yuhan/


The following documentation contains:

I. Generated Personalized Postcard Examples
II. Individual Sections for the 8 Generative Art Visual Functions
III. Integrating All the Code Together

Note: the code inserted in the individual sections is excerpted from the full sketch and only illustrates the preceding paragraph(s), so it can’t run on its own. Please refer to the link to my shared Google folder below to access and download all the code files, if you are interested 🙂

Link to the shared Google folder of code files:
https://drive.google.com/drive/folders/12rireUTbUMWhf14tQpJ1Abma32SNAXfG?usp=sharing

I. Generated Personalized Postcard Examples

I asked my friends to give me three words that empower them and to try my final project. The following images are the personalized postcards generated for them. They are just a few of the possible outputs; the visual outputs that “Empower Me” can generate go far beyond this.

II. Individual Sections for the 8 Generative Art Visual Templates

Visual Template# 1: To Claude Monet

The first case “To Claude Monet” was inspired by one in-class coding example, which uses ellipses to manipulate images.

 

It reminded me of the great artist Claude Monet, who is world-famous for his Water Lilies. I saw one at MoMA and another at the Met. Monet used fast brush strokes to capture the light, completely different from the Realist painters before him. When I stood close to the Met’s collection of Monet’s works, the paintings dissolved into rectangles and points. So I wondered whether I could recreate Monet’s style.

https://www.moma.org/collection/works/80220

The first problem I met was how to constrain the brush strokes within a limited space; otherwise they would ruin the postcard. I first adjusted the coordinate values of the PVector points in the Brush class, but that failed and the image was still messy. Then I found the stupidest but also the smartest way: I created a cover made of 4 rectangles to reframe the visual. And it looks nice.

void draw() {
  base.loadPixels();
  for (int i = 0; i < brush.length; i++) {
    brush[i].run();
  }
  // the white cover: 4 rectangles that mask strokes outside the frame
  fill(255);
  noStroke();
  rect(0, 0, width, 40);
  rect(0, 40, 55, 580);
  rect(567, 40, 55, 580);
  rect(0, 620, width, height-610);

......
}

 

Visual Template# 2: Thinking

Last Wednesday, when I finally got my textbook Generative Art, I couldn’t wait to read it, and I encountered the following image by Marius Watz. My first interpretation of the image was that it draws rotated lines of random length and uses curves to connect them. “It’s not difficult, it’s doable! Let me try to create it in Processing!” I thought to myself.

And I did exactly what I had in mind. The sketch always draws two random lines together and connects them with a Bézier curve. The coding didn’t take long. You might be curious why I make it draw 13 times, namely 26 lines in total. I did play with the number, and 13 turned out to give the best visual result, neither over-simple nor overwhelming.

......

if (i < 13) {
  i += 1;
  translate(width/2, 40+290);
  strokeWeight(random(0.5, 6));
  stroke(random(0, 255));  // greyscale stroke
  float x = random(-245, 245);
  float y = random(-280, 280);
  float A = random(-245, 245);
  float B = random(-280, 280);
  line(0, 0, x, y);
  line(0, 0, A, B);
  noFill();
  // random control points for the connecting Bezier curve
  float curve = random(0, 90);
  float curve2 = random(0, 90);
  float curve3 = random(0, 90);
  float curve4 = random(0, 90);
  bezier(x, y, curve, curve2, curve3, curve4, A, B);
}

......

Two possible outputs:

Visual Template#3: Nature

Once I was told by a colleague, a graphic designer, that nature is the best teacher for coloring. The fabulous Pantone Inc., famous for its color matching system, also continuously draws inspiration from nature.


So my third visual relates to nature as well. Basically, you can upload any image you like, especially one whose colors you like. My code picks the colors of a few pixels and applies them as the fill colors of the rectangles.

class Rect {
  float x, y, w, h;
  PVector location;

  Rect() {
    location = new PVector(random(37.5, width-37.5), random(40, 620));
  }

  void display() {
    // sample the source image's color at this rectangle's location
    color c = swimmingpool.get(int(location.x), int(location.y));
    stroke(c, 50);
    fill(c, random(100, 200));
    strokeWeight(random(0.5, 3));
    pushMatrix();
    translate(location.x, location.y);
    rotate(random(PI/8, PI));
    rectMode(CENTER);
    rect(x, y, width, height/4);
    popMatrix();
  }

......

The original image and two possible outputs:

Visual Template#4: Accumulation

The fourth case is a modified version of one of my earlier weekly assignments. It is recreated from a poster in the series “World Design Capital Taipei”.

The original poster and my recreated version:
 

The trick here is how to constrain the ellipses within the circular space. The solution is actually primary school math: just use the sin and cos functions!

......

if (i_5 < 70) {
  i_5 += 1;
  ellipseMode(CENTER);
  noStroke();
  fill(0, random(0, 255), 0, random(85, 100));
  a_5 = random(37.5, width-37.5);
  b_5 = random(40, 620);
  r_5 = random(5, 25);
  ellipse(a_5, b_5, r_5, r_5);
  stroke(random(0, 255), random(0, 255), random(0, 255), random(900, 1000));
  strokeWeight(2);
  angle_5 += random(angle_5);
  // cos and sin keep the line's endpoint exactly radius_5 away
  // from the ellipse center (a_5, b_5)
  float radius_5 = random(40, 100);
  float x_5 = a_5 + radius_5 * cos(angle_5);
  float y_5 = b_5 + radius_5 * sin(angle_5);
  line(a_5, b_5, x_5, y_5);
}

......

For my final project, I changed the color tone to blue on purpose, since it makes the visual result more harmonious.

One possible output:

Visual Template#5: Simplicity

The fifth case recreates the book cover of “Interaction of Color” by Josef Albers, an abstract painter who utilized geometric forms.

I made Processing draw 8 rectangles in different shades of red, with slight variations in the gap between each pair of rectangles and in their positions.

......

int i_6 = 0;
if (i_6 < 8) {
  if (y_6 <= 520) {
    pushMatrix();
    i_6 += 1;
    translate(width/2, 80);
    noStroke();
    fill(random(100, 160), 0, 0, random(200, 300));
    rectMode(CENTER);
    rotate(random(0, PI/80));  // slight random tilt
    float w = random(300, 380);
    rect(x_6, y_6, w, 70);
    float gap = random(10, 120);
    y_6 += gap * i_6;  // the gap grows with each rectangle
    popMatrix();
  }

......

I tried both a blue color tone and a red one; the red looks nicer. I also played around with the positioning: staying relatively in the middle, not too far left or right, gives the best result.

In one user testing session, this simple visual template was surprisingly popular.

Visual Template# 6: Femininity

This visual result was inspired by “Wave Clock” by Matt Pearson.

My first impression of this work was not a clock but a bloom. When I looked into Pearson’s detailed code explanation, I learned that he uses the Perlin noise function, which we also covered in class. I modified his “noise” code and created my own sakura.

float x_6_1, y_6_1;
float radius_6 = 200;
float radiusNoise = random(10);
......

pushMatrix();
translate(width/2, 330);
strokeWeight(0.1);
stroke(20, 50, 70);
if (ang <= 1440) {
  ang += 0.5;
  radiusNoise += 0.05;
  float thisRadius = radius_6 + (noise(radiusNoise) * 200) - 100;
  float rad = radians(ang);
  x_6_1 = thisRadius * cos(rad);
  y_6_1 = thisRadius * sin(rad);
  stroke(232, 146, 164, random(120, 900));
  line(0, 0, x_6_1, y_6_1);
}
popMatrix();

......

Basically, 2,880 lines are drawn over the whole process, all sharing the same start point but ending at different points due to the noise function. The reason I chose an if statement instead of a for loop is that I personally value the process of creating generative art more than the result: with if, only one line is drawn per frame, so the viewer watches the bloom unfold.

To make it more understandable: you could think of each line as connecting two points, A and B. Point A is always (0,0). As for point B, first think about how you would draw a circle using dots. You would compute the x, y position values as follows:

x = centX + cos(ang)*radius;
y = centY + sin(ang)*radius;

If the radius is constant, you get a perfect circle. Now we add the noise function: the radius is no longer constant but keeps changing.

float thisRadius = radius_6 + (noise(radiusNoise) * 200) - 100;
float rad = radians(ang);
x_6_1 = thisRadius * cos(rad);
y_6_1 = thisRadius * sin(rad);
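To see the difference, compare a constant radius with a varying one in plain Java (a simple sine wobble stands in for Processing's noise() here, just to keep the example self-contained):

```java
public class NoisyRadius {
    // distance of the polar point (r*cos(ang), r*sin(ang)) from the origin
    static double distFromCenter(double r, double ang) {
        double x = r * Math.cos(ang);
        double y = r * Math.sin(ang);
        return Math.hypot(x, y);
    }

    public static void main(String[] args) {
        // constant radius: every point sits exactly 200 from the center
        System.out.println(
            Math.abs(distFromCenter(200, 0.3) - distFromCenter(200, 1.7)) < 1e-9);

        // varying radius (sine wobble in place of noise()): distances differ
        double rA = 200 + 100 * Math.sin(7 * 0.3);
        double rB = 200 + 100 * Math.sin(7 * 1.7);
        System.out.println(
            Math.abs(distFromCenter(rA, 0.3) - distFromCenter(rB, 1.7)) > 1.0);
    }
}
```

With the constant radius the endpoints trace a circle; with the wobbling radius they scatter in and out, which is exactly what makes the petals irregular.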

One possible output:

This color turns out to work the best: it has enough variation, and the color itself is neither too strong nor too light.

Visual Template#7: (Dis)Organized

When I was out of ideas and browsing the internet, I found the following image.

It even looks like a modern version of a Jackson Pollock, perhaps because I have visited Pollock’s artwork so many times in New York. I like the image above a lot and really wanted to recreate it. The ellipses are easy to create, but how to make the chaotic yet beautiful background?

It reminded me of the spiral code with the noise function. Matt Pearson, in his book “Generative Art,” demonstrates the result of drawing 100 spirals with noise. It looks like this:

Could I build on this to achieve the messy but beautiful background I wanted? I changed the numbers in Pearson’s code again and again, and finally got a relatively similar one.

if (number_7 < 1) {
  strokeWeight(0.5);
  smooth();
  for (int i_7 = 0; i_7 < 10; i_7++) {
    pushMatrix();
    translate(120, 120);
    float lastx_7 = -999;
    float lasty_7 = -999;
    float radiusNoise_7 = random(10);
    float radius_7 = 60;
    stroke(random(20), random(50), random(70), 50);
    int startangle_7 = int(random(360));
    int endangle_7 = 1440 + int(random(80));
    int anglestep_7 = 5 + int(random(3));
    for (float ang_7 = startangle_7; ang_7 <= endangle_7; ang_7 += anglestep_7) {
      radiusNoise_7 += 0.05;
      radius_7 += 0.5;
      float thisRadius_7 = radius_7 + (noise(radiusNoise_7) * 200) - 100;
      float rad_7 = radians(ang_7);
      centX_7 = random(50, 300);
      centY_7 = random(100, 300);
      x_7 = centX_7 + (thisRadius_7 * cos(rad_7));
      y_7 = centY_7 + (thisRadius_7 * sin(rad_7 + PI/8));

      if (lastx_7 > -999) {
        line(x_7, y_7, lastx_7, lasty_7);
      }
      lastx_7 = x_7;
      lasty_7 = y_7;
    }
    popMatrix();
  }

  for (int i_2 = 0; i_2 < 30; i_2++) {
    fill(0, random(30, 300));
    float R = random(3, 30);
    ellipse(random(37.5, width-37.5), random(40, 620), R, R);
  }
  number_7 += 1;
}

Two possible outputs:

Visual Template#8: Spot Light

The last visual is also based on an image I found on the Internet.

The way I perceive this image: take a bundle of lines and rotate it to compose a circle, then rotate the whole circle multiple times. That seemed very doable, so I followed my initial idea and achieved a very similar result. But I changed the color, since pure black feels depressing.

case 8:
  if (i_8 <= 10) {
    i_8++;
    pushMatrix();
    translate(width/2, 330);
    float radius_8 = random(80, 200);
    float aci = 2*PI;
    rotate(random(PI));
    for (float Ang_8 = 0; Ang_8 <= 360; Ang_8 += 30) {
      stroke(89, random(112, 140), 170, 90);
      // Pink color tone:
      // stroke(220, 115, 130, 90);
      strokeWeight(random(0.05, 4));
      rotate(random(aci/6, aci/12));
      for (float ang_8 = 0; ang_8 <= 15; ang_8 += 1.5) {
        float x, y, x_2, y_2;
        float rad = radians(ang_8);
        x = radius_8 * cos(rad);
        y = radius_8 * sin(rad);
        float radius_2 = random(0, 200); // alternative: random(100, 150)
        x_2 = x + (radius_2 * cos(rad));
        y_2 = y + (radius_2 * sin(rad));
        line(x, y, x_2, y_2);
      }
    }
    popMatrix();
  }
  break;

One possible output:

III. Integrate All the Code Together

#Think Consistently

Since I coded each visual function (postcard template) separately, when integrating all the functions together I had to rename the variables to avoid collisions. The easiest way is appending numbers to them. For example:

//variable of Case 4
float r, r2, r3;
float x_3_1, x_3_2, x_3_3, x_3_4;
float y_3_1, y_3_2, y_3_3, y_3_4;
float angle, angle2, angle3;
float radius, radius3;
float number_4=0;
float i_4=0;

I also needed to check all the sizes to make each template coherent.

#Insert Personalized Content

A personalized postcard has to have a personal connection with my users. I want them to be able to insert whatever content they want.

The typing function references the “typing example” we learned in week 3. The problem I met here was how to center the text: initially, the typing started at the top-left corner, while I preferred it to always be in the middle. So I set up two floats to keep the position centered.

......
 
float a = width/2;
float b = 720;
line(a + cursorPosition/2, b - 40, a + cursorPosition/2, b + 25);
textAlign(CENTER);
fill(0);
font = createFont("BaskOldFace", 10);
textFont(font, 60);
text(letters, a, b);

.......

void keyPressed() {
  if (key == BACKSPACE) {
    if (letters.length() > 0) {
      letters = letters.substring(0, letters.length() - 1);
    }
  } else if (textWidth(letters + key) < width) {
    letters = letters + key;
  }
}
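The backspace/append logic can be exercised on its own in plain Java (the textWidth() bound is Processing-specific and is left out of this hedged sketch):

```java
public class TypingBuffer {
    static String letters = "";

    // mirrors keyPressed(): backspace trims the last character, others append
    static void press(char key) {
        if (key == '\b') {                  // '\b' stands in for BACKSPACE
            if (letters.length() > 0) {
                letters = letters.substring(0, letters.length() - 1);
            }
        } else {
            letters = letters + key;
        }
    }

    public static void main(String[] args) {
        press('H'); press('i'); press('!');
        press('\b');                        // deletes the '!'
        System.out.println(letters);        // prints "Hi"
    }
}
```

The length check matters: without it, pressing backspace on an empty string would throw a StringIndexOutOfBoundsException.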

#Switch

There is a built-in switch/case statement in Processing, which lets you switch between different individual functions. Its structure looks like this:

switch (drawMode) {
case 1:
  ......
  break;

case 2:
  ......
  break;
}

Once I forgot to include the break. My sketch went crazy and ran all the functions together, like this:

Afterward, as long as every case ends with a break, it works perfectly.
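That “everything runs together” behavior is classic switch fall-through: without break, execution continues into the next case. A minimal Java illustration (the case labels here are hypothetical, not the sketch’s real templates):

```java
public class FallThrough {
    static String run(int mode) {
        StringBuilder out = new StringBuilder();
        switch (mode) {
        case 1:
            out.append("template1 ");
            // no break here: execution falls through into case 2
        case 2:
            out.append("template2");
            break;                     // break stops the fall-through
        case 3:
            out.append("template3");
            break;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(1));    // both case 1 and case 2 run
        System.out.println(run(2));    // only case 2 runs
    }
}
```

With mode 1, two templates draw on top of each other, just like the broken sketch; the break after case 2 is what keeps mode 2 clean.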

The switch/case statement alone cannot do the switching, though; I needed another function to trigger it, and the keyPressed() function occurred to me. Initially, I thought about using number keys to call the individual visual functions. Unfortunately, I also have the typing function, which records every key press, so the number would show up in the typed text as well.

Thus, I turned to the arrow keys to call the visual functions.

if (keyCode == UP) {
  if (drawMode > 1) {
    drawMode -= 1;
  } else {
    drawMode = 8;
    y_6 = 40;
  }

......

else if (keyCode == DOWN) {
  if (drawMode < 8) {
    drawMode += 1;
  } else {
    drawMode = 1;
  }

......
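The UP/DOWN wrap-around can be checked in isolation (a plain-Java sketch, with drawMode cycling through the eight templates as above):

```java
public class ModeCycle {
    static int drawMode = 1;

    // UP steps backward through the templates, wrapping 1 -> 8
    static void up()   { drawMode = (drawMode > 1) ? drawMode - 1 : 8; }
    // DOWN steps forward, wrapping 8 -> 1
    static void down() { drawMode = (drawMode < 8) ? drawMode + 1 : 1; }

    public static void main(String[] args) {
        up();                             // 1 wraps around to 8
        System.out.println(drawMode);
        down();                           // 8 wraps back to 1
        System.out.println(drawMode);
        down();                           // 1 -> 2, the normal step
        System.out.println(drawMode);
    }
}
```

Either key always lands on a valid template number, so the user can cycle endlessly in both directions.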

One issue worth pointing out: since I used if conditions in several templates to limit the number of shapes shown, once a template has been displayed its if condition is no longer satisfied, so on the next visit it comes up blank. My learning point here is that, when using switch/case, don’t forget to reset all the variables back to their initial settings. In other words, include what you originally have in setup() in the code that calls the functions as well. For example:

if (keyCode == UP) {
  //background(255);
  if (drawMode > 1) {
    drawMode -= 1;
  } else {
    drawMode = 8;
    y_6 = 40;
  }
  if (drawMode == 2) {
    background(255);
    i = 0;
  } else if (drawMode == 1) {
    base = loadImage("Lotus.jpg");
    base.resize(500, 620);
    image(base, 37.5, 40);
  } else if (drawMode == 3) {
    swimmingpool = loadImage("Hockney.jpg");
    swimmingpool.resize(500, 580);
    image(swimmingpool, 37.5, 40);
  } else if (drawMode == 4) {
    background(255);
    i_4 = 0;
  } else if (drawMode == 5) {
    background(255);
    y_6 = 40;
  } else if (drawMode == 6) {
    background(255);
    ang = 0;
  } else if (drawMode == 7) {
    background(255);
    centX_7 = 250;
    centY_7 = 150;
    number_7 = 0;
  } else if (drawMode == 8) {
    background(255);
    i_8 = 0;
  }
}

#Refresh the Image

When I managed to switch templates and type, it occurred to me that since all the visuals are generative art, I should let my users refresh the image and get a new visual output if they like the template but are not quite satisfied with the current result.

So I use the LEFT/RIGHT arrow keys to let users generate new visual outputs. Here too, all the variables need to be reset to their initial settings.

if (keyCode == LEFT || keyCode == RIGHT) {
  if (drawMode == 2) {
    background(255);
    i = 0;
  } else if (drawMode == 1) {
    base = loadImage("Lotus.jpg");
    base.resize(500, 620);
    image(base, 37.5, 40);
  } else if (drawMode == 4) {
    background(255);
    i_4 = 0;
  } else if (drawMode == 5) {
    background(255);
    y_6 = 40;
  } else if (drawMode == 6) {
    background(255);
    ang = 0;
  } else if (drawMode == 7) {
    background(255);
    centX_7 = 250;
    centY_7 = 150;
    number_7 = 0;
  } else if (drawMode == 8) {
    background(255);
    i_8 = 0;
  }
}

 

Thanks for your interest in my project. If you want to access and download all the code files, please refer to the link below. The code shown above consists of selected excerpts, not the complete sketches.

Link to the shared Google folder of code files:
https://drive.google.com/drive/folders/12rireUTbUMWhf14tQpJ1Abma32SNAXfG?usp=sharing