Talking Fabrics|Field Trip 2|Asiya Gubaydullina

Today we went to a different fabric market, and it was quite an interesting experience since we all had an idea for the final project and could look for the materials we wanted to use.

IMG_1773 IMG_1774

Our group found a few pieces we thought would work and asked for samples to experiment with. It is a pretty hard material to get, so it took us a while. Overall, we got three samples, which is definitely a success. This market is pretty big and definitely has more fabric choices than the one we went to before.

Interaction Lab|Lab 10|Asiya Gubaydullina

Section: Antonius

For this lab we were supposed to work with either the video or the OpenCV library in Processing. I chose the video library and tried to “remix” the Mirror example, which mirrors the image received from the camera as a grid of small rotating squares. I changed the squares into circles and then moved on to the colors. At first I had trouble changing the color theme of the mirrored image; once I tried to tweak the colors I ended up with a black screen. Of course, Antonius came to the rescue and explained what I was doing wrong. It turns out I can’t really change the color scheme completely, but rather turn the red, green, or blue channels on and off. Later on, Antonius suggested adding the on/off feature via key presses, which was done in the span of ten minutes right before class ended. The results follow!

The code is super, super long.

/**
 * Mirror (modified)
 * Based on "Mirror" by Daniel Shiffman.
 *
 * Each pixel from the video source is drawn as a circle with rotation
 * based on brightness. Hold 'p' to mute the green channel; hold 'g' to
 * keep only the green channel.
 */

import processing.video.*;

int cellSize = 20; // Size of each cell in the grid
int cols, rows;    // Number of columns and rows in our system
Capture video;     // Variable for capture device

void setup() {
  size(640, 480);
  frameRate(30);
  cols = width / cellSize;
  rows = height / cellSize;
  colorMode(RGB, 255, 255, 255, 100);

  // This is the default video input; see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, width, height);

  // Start capturing the images from the camera
  video.start();

  background(0);
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();

    // Loop over columns and rows
    for (int i = 0; i < cols; i++) {
      for (int j = 0; j < rows; j++) {

        // Where are we, pixel-wise?
        int x = i * cellSize;
        int y = j * cellSize;
        int loc = (video.width - x - 1) + y * video.width; // Reversing x to mirror the image
        float r = red(video.pixels[loc]);
        float g = green(video.pixels[loc]);
        float b = blue(video.pixels[loc]);

        // Channel toggles: while 'p' is held, mute the green channel;
        // while 'g' is held, keep only the green channel
        if (keyPressed && (key == 'p' || key == 'P')) {
          g = 0;
        }
        if (keyPressed && (key == 'g' || key == 'G')) {
          r = 0;
          b = 0;
        }

        // Make a new color with an alpha component
        color c = color(r, g, b, 75);

        // Using translate so the rotation pivots on the cell center
        pushMatrix();
        translate(x + cellSize/2, y + cellSize/2);
        // Rotation formula based on brightness
        rotate(2 * PI * brightness(c) / 255.0);
        ellipseMode(CENTER);
        fill(c);
        noStroke();
        // Circles are larger than the cell for some overlap
        ellipse(0, 0, cellSize + 6, cellSize + 6);
        popMatrix();
      }
    }
  }
}
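Since the sketch runs in RGB mode, red(), green(), and blue() simply unpack the three channel bytes of each packed pixel, so turning a channel “off” just means zeroing one byte before recombining. Here is a minimal plain-Java sketch of that idea; the class and helper names are mine, not part of the lab code:

```java
public class ChannelToggle {
    // Pack RGB bytes into a single int, as Processing's color() does in RGB mode
    static int pack(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // Zero the green channel of a packed pixel (the effect of holding 'p')
    static int muteGreen(int c) {
        return c & 0xFF00FF;
    }

    // Keep only the green channel (the effect of holding 'g')
    static int onlyGreen(int c) {
        return c & 0x00FF00;
    }

    public static void main(String[] args) {
        int c = pack(200, 150, 100);                         // 0xC89664
        System.out.println(Integer.toHexString(muteGreen(c))); // c80064
        System.out.println(Integer.toHexString(onlyGreen(c))); // 9600
    }
}
```

This is why a full color-scheme change was not possible in the lab: the pixel only carries three channels, so the available moves are masking them in and out.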

 

Response to History of Internet Art

Reading this made me realise how far the internet has come as a medium: first of communication and now of art. We’re now blessed with more robust and user-friendly markup and style sheet languages, giving us more room to tweak and play around.

I really enjoyed this reading, as it allowed me to learn about internet artwork of the past. To me, it’s amazing that we’ve gone from using only hyperlinks (the Heath Bunting piece) to something as interactive as the Johnny Cash project. One interesting thing I noticed, however, is that some of the artists discussed, such as Golan Levin, aren’t just internet artists but also developers/programmers whose work is curated in Processing’s default library. It’s interesting to see that there is, after all, a way to connect technology, science, and the internet with art, eventually enabling programmers to be artists at the same time, and vice versa. And, if I can stretch this a bit, it shows that if we are curious enough, we can put together two or more things that seem contradictory at first to create something fresh and inspiring.

User Experience Design: POP App Design

For this assignment, I used the POP app to design an app prototype for the teachUNICEF website. What I noticed immediately upon opening the website is the trouble a user might have navigating the map and categories. The connection between the categories and the pins on the map is not immediately clear. After understanding that the pins on the map reference the various categories within different countries, the user might try to click on any pin to read about a certain topic (say, education) in a certain country. These pins, however, do not seem to bring the user anywhere and are quite stationary. Thus, I tried to address this problem in my app prototype.

I started off with an opening page giving the user the option to browse by map or by category. Should the user choose to browse by map, the map pops up with all the pins stretched across it. The map option is helpful for users who would like to research many topics within a country of interest. The user can then click on the country they want on the map, which brings them to a page with specific information about said country and the various topics available there. The user may then click on a category that interests them within this page, bringing them to a page exclusively about that category in that country (for example, education in Nigeria).

On this page, the user may return to the map or go to a new page that lists all the categories across the globe. The category page can be accessed from the “specific category in a country” page as well as from the home page. This page is helpful for users who want to learn about a specific category of interest across different countries. When the user chooses a category, they reach a page with information about that category across countries and a map that shows the pins for this category in the various countries. Upon clicking one of these pins, the user goes to the page with information exclusively about this category in this country.

Here’s the link to my prototype!

https://popapp.in/projects/572464aa9bf2c0dc3e7e3fcc/preview

Zeyao Lab 10 Documentation

This week I chose to use computer vision (OpenCV) to make a Snapchat-style filter. At first, I just used a transparent photo to make a simple filter on my face, but then I found the size didn’t fit my face. I asked the professor for help and realized I could change the size of the image.

The size is set on this line:

image(snap, faces[i].x, faces[i].y, faces[i].width, faces[i].height);

Screen Shot 2016-04-30 at 10.43.28 AM

 

After getting this basic function working, I wanted to make something more than that, so I decided to have the filter change every time my face re-entered the camera frame. Matt helped me with this. We created two variables, an int and a boolean. We numbered the different filters and then used an if statement to change the number. Also, to keep the change from looping every frame, we created another if statement.

 

Here is the code:
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
PImage snap;
int num;
boolean doOnce = false;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
  snap = loadImage("snapchat1.png");
  num = 0;
}

void draw() {
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(1);
  Rectangle[] faces = opencv.detect();
  println("---");
  println(faces.length);
  println(num);
  // When the face leaves the frame, advance to the next filter, once
  if (faces.length == 0 && doOnce == false) {
    num++;
    if (num >= 4) {
      num = 1; // wrap around: the filter images are numbered 1..3
    }
    snap = loadImage("snapchat" + num + ".png");
    doOnce = true;
  }
  // Re-arm the switch once a face is visible again
  if (faces.length > 0) {
    doOnce = false;
  }
  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    image(snap, faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}
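The int counter plus the boolean guard form a simple edge trigger: the filter advances exactly once when the face disappears, instead of cycling on every frame the face is gone. A plain-Java distillation of that logic (class and method names are mine, chosen for illustration):

```java
public class FilterCycler {
    int num = 0;            // index of the current filter image
    boolean doOnce = false; // guards against advancing every frame

    // Call once per frame with the number of detected faces;
    // returns the filter index to display.
    int update(int facesDetected) {
        if (facesDetected == 0 && !doOnce) {
            num++;
            if (num >= 4) num = 1; // wrap around: filters are numbered 1..3
            doOnce = true;         // don't advance again until a face returns
        }
        if (facesDetected > 0) {
            doOnce = false;        // re-arm once a face is visible again
        }
        return num;
    }

    public static void main(String[] args) {
        FilterCycler fc = new FilterCycler();
        fc.update(1);                     // face visible: index stays at 0
        System.out.println(fc.update(0)); // face leaves: advances to 1
        System.out.println(fc.update(0)); // still gone: stays at 1
        fc.update(1);                     // face returns: re-arms the guard
        System.out.println(fc.update(0)); // leaves again: advances to 2
    }
}
```

Without the doOnce guard, every frame with zero faces would bump num and reload the image, so the filter would flicker through all three overlays while the face was out of frame.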

 

Lab 10

For this lab, I decided to use the OpenCV library for Processing. At first, I googled ‘transparent snapchat filters’ to find the dog filter and see if it would work with Processing. I replaced the image in the initial example code with one that supported PNG transparency, and achieved this result:

Screen Shot 2016-04-29 at 2.14.54 PM

As usual, though, I craved something cooler + more creative. I’m a huge fan of Hugh Hefner, so I searched for cool images of him that I might want to make myself a part of. I found this cool one, and it looks great on me!

Screen Shot 2016-04-29 at 2.57.56 PM

However, I was still disappointed. I wanted him to still be in the image, so I went back into Photoshop & lowered the opacity on my selection of his face, rather than deleting him completely. Here’s that result:

 

Screen Shot 2016-04-29 at 3.02.15 PM

Sarah Wardle’s Lab 10|Antonius’s Class

First I was planning to incorporate video into this by making a game, but then I thought it would be cool to make my own snapchat filter.

Here is the code I used:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
PImage bunny;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
  bunny = loadImage("cattt.png");
}

void draw() {
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(1);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    // Draw the overlay at 90% of the face height so the nose lines up
    image(bunny, faces[i].x, faces[i].y, faces[i].width, faces[i].height*.9);
  }
}

void captureEvent(Capture c) {
  c.read();
}

I used the face detection example code, manipulated it, renamed the image, and added a transparent Snapchat-style filter of a bunny. Here’s what happened!

IMG_6658

The filter didn’t initially fit correctly onto the face, so I scaled the height of the image so that the nose fit well onto the detected face. Yay!

Lab 10|04.29.16|Jingyi Zhang (Jennifer)|Prof. Daniel Mikesell

PROJECT: I made a “video player” with the basic controls for a video: playing, speeding up, pausing, and stopping.

CODE:

import processing.video.*;

Movie mov;
PImage play, speed, pause, stop;
float s = 1;
int xp, yp, xs, ys, xp2, yp2, xs2, ys2;

void setup() {
  size(640, 480);
  background(0);
  mov = new Movie(this, "street.mov");
  play = loadImage("play.jpg");
  speed = loadImage("speed.jpg");
  pause = loadImage("pause.jpg");
  stop = loadImage("stop.jpg");
  mov.play();
  mov.jump(0);
  mov.pause();
}

void draw() {
  // Button positions: all four 20x20 buttons sit on one row at y = 40
  xp = yp = ys = ys2 = yp2 = 40;
  xs = 70; xs2 = 100; xp2 = 130;
  mov.speed(s);
  image(mov, 0, 0);
  image(play, xp, yp, 20, 20);
  image(speed, xs, ys, 20, 20);
  image(pause, xp2, yp2, 20, 20);
  image(stop, xs2, ys2, 20, 20);
}

void mousePressed() {
  // Play: resume at normal speed
  if (xp <= mouseX && mouseX <= xp+20 && yp <= mouseY && mouseY <= yp+20) {
    mov.play();
    s = 1;
  }
  // Speed: each click multiplies the playback speed by 1.5
  if (xs <= mouseX && mouseX <= xs+20 && ys <= mouseY && mouseY <= ys+20) {
    mov.play();
    s = s * 1.5;
  }
  // Pause
  if (xp2 <= mouseX && mouseX <= xp2+20 && yp2 <= mouseY && mouseY <= yp2+20) {
    mov.pause();
  }
  // Stop: jump back to the start and pause
  if (xs2 <= mouseX && mouseX <= xs2+20 && ys2 <= mouseY && mouseY <= ys2+20) {
    mov.jump(0);
    mov.pause();
  }
}

void movieEvent(Movie mov) {
  mov.read();
}

P.S. The video I use is from the examples provided by the Video library.
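Each button check in mousePressed() is a point-in-rectangle test against a 20x20 square, and the speed button compounds the playback rate by 1.5 on every click. A small plain-Java sketch of those two pieces (class and method names are mine, for illustration only):

```java
public class PlayerButtons {
    // True when (mx, my) falls inside the 20x20 button whose
    // top-left corner is at (bx, by) -- the test behind each button
    static boolean hit(int mx, int my, int bx, int by) {
        return bx <= mx && mx <= bx + 20 && by <= my && my <= by + 20;
    }

    // Each click of the speed button multiplies the playback speed by 1.5
    static float speedUp(float s) {
        return s * 1.5f;
    }

    public static void main(String[] args) {
        // Play button at (40, 40): a click at (50, 50) lands inside it
        System.out.println(hit(50, 50, 40, 40)); // true
        // A click at (75, 50) falls in the gap before the speed button
        System.out.println(hit(75, 50, 40, 40)); // false
        System.out.println(speedUp(1.0f));       // 1.5
    }
}
```

Writing the test with both bounds on each axis is what makes the pause and stop buttons work; a copy-paste slip that compares mouseY against an x coordinate would silently make a button unclickable.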

EFFECT VIDEO:

Lab0429