Kinetic Interfaces – Week 12 OSC Weekly Assignment (Echo)

For the OSC weekly assignment, I created a synchronised drawing board. The sender sketch can be used to draw colourful lines, and the drawing is displayed on the receiver’s screen at the same time. Here is the demo:

[Demo video: demo1]

The sender code:

import oscP5.*;
import netP5.*;


OscP5 oscP5;
NetAddress myRemoteLocation;


void setup() {
  size(400,400);
  background(100);
  
  // listen for incoming OSC messages on port 12000
  oscP5 = new OscP5(this, 12000);
  // the receiver runs on the same machine, listening on port 12001
  myRemoteLocation = new NetAddress("127.0.0.1", 12001);
}


void draw() {
  //
}


void mouseDragged() {
  OscMessage msg = new OscMessage("/Echo");
   
  // normalise the current and previous mouse positions to 0..1
  float x = map(mouseX, 0, width, 0, 1);
  float y = map(mouseY, 0, height, 0, 1);
  float X = map(pmouseX, 0, width, 0, 1);
  float Y = map(pmouseY, 0, height, 0, 1);
 
  msg.add( x );
  msg.add( y );
  msg.add( X );
  msg.add( Y );

  oscP5.send( msg, myRemoteLocation );
  
  stroke(random(255),random(255),random(255));
  line(mouseX, mouseY, pmouseX, pmouseY);
}
The receiver code:

import oscP5.*;
import netP5.*;


OscP5 oscP5;

float xValue = 0;
float yValue = 0;
float XValue = 0;
float YValue = 0;

void setup() {
  size(800, 500);
  background(100);

  // listen on port 12001, the port the sender targets
  oscP5 = new OscP5(this, 12001);
}


void draw() {
  // map the normalised 0..1 values back to this sketch's dimensions
  int x = (int)map(xValue, 0, 1, 0, width);
  int y = (int)map(yValue, 0, 1, 0, height);
  int X = (int)map(XValue, 0, 1, 0, width);
  int Y = (int)map(YValue, 0, 1, 0, height);
  stroke(random(255),random(255),random(255));
  line(x,y,X,Y);
}

void oscEvent(OscMessage msg) {
  println("___");
  println("Pattern: " + msg.addrPattern() );
  println("Typetag: " + msg.typetag() );
  println();

  xValue = msg.get(0).floatValue();
  yValue = msg.get(1).floatValue();
  XValue = msg.get(2).floatValue();
  YValue = msg.get(3).floatValue();
  println("x" + xValue);
  println("y" + yValue);
  println("X" + XValue);
  println("Y" + YValue);
  println();
}

Kinetic Interfaces: Week 12 – OSC Communication (Cyndi)

In addition to what I’ve learned in class, for this assignment I tried to send all the data from the sender to the receiver so that the two sketches draw rectangles simultaneously. I got my own IP address from the receiver and pasted it into the sender code, then sent the x, y position and the size of the rectangle to the receiver. As a result, while I draw a rectangle on the sender canvas, the receiver canvas draws one at the same time and also reports the x, y position and the size of the rectangle.

Also, I learned that the procedure is to open the receiver sketch first, then open the sender sketch, and then click on the receiver canvas again. It is also necessary to make sure that only two canvases are open at the same time, or the data will be lost.

Here is the demo:

The code:
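The original code embed did not survive the post, but based on the description above, a minimal sketch of the sender side might look like the following (the /rect address pattern and the drag-distance size calculation are assumptions for illustration, not necessarily what the original sketch did):

import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress receiverLocation;

void setup() {
  size(400, 400);
  background(255);
  oscP5 = new OscP5(this, 12000);
  // replace 127.0.0.1 with the IP address reported by the receiver
  receiverLocation = new NetAddress("127.0.0.1", 12001);
}

void draw() {
  //
}

void mouseDragged() {
  // assumption: the rectangle's size grows with the drag distance
  float s = dist(pmouseX, pmouseY, mouseX, mouseY) * 5;

  OscMessage msg = new OscMessage("/rect");
  msg.add((float) mouseX);
  msg.add((float) mouseY);
  msg.add(s);
  oscP5.send(msg, receiverLocation);

  rect(mouseX, mouseY, s, s);
}

The receiver would mirror Echo's sketch above: listen on port 12001, read the three floats in oscEvent() with msg.get(i).floatValue(), print them, and draw the rectangle.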

Kinetic Interfaces: UE4 Kinect Third Person Shooter Blueprint | Kevin Xu and Jiayi Wang (Moon)

When we first presented the project, we had our eyes set on creating a fighting game. While creating the character and the movement controls, we discovered this would be very hard to do, because the body parts did not operate on a vector basis but by absolute position. This meant you could block a punch with your arm, but as long as the other player kept swinging, their arm would eventually pass through yours and hit your torso or head anyway. Because of this collision problem, we found the best workaround was to turn it into a shooter game instead. This, however, had its own set of problems.

Because of the way the Kinect works, it is nearly impossible to track skeletons by a persistent individual ID. To keep a decent amount of consistency, instead of searching for specific IDs we simply had a blueprint that takes the first and second index from the function “getAllBodies”, which lists all skeletons in view of the Kinect. This allowed us to track two players no matter which skeleton number they were assigned (see the sketch below).
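Blueprints are node graphs rather than text code, but the indexing idea is small enough to sketch in Processing-style Java. Body and the list standing in for “getAllBodies” output are hypothetical illustrations here, not the plugin's real API:

// Hypothetical stand-in for the plugin's skeleton struct.
class Body {
  int trackingId;    // the unstable ID the Kinect assigns
  boolean isTracked;

  Body(int trackingId, boolean isTracked) {
    this.trackingId = trackingId;
    this.isTracked = isTracked;
  }
}

// Return the body driving the given player (0 or 1), or null if not
// enough bodies are in view. Instead of chasing specific tracking IDs,
// we simply take the first and second tracked entries in order.
Body bodyForPlayer(ArrayList<Body> allBodies, int playerIndex) {
  int found = 0;
  for (Body b : allBodies) {
    if (!b.isTracked) continue;
    if (found == playerIndex) return b;
    found++;
  }
  return null;
}

Each player's pawn then just asks for index 0 or index 1 every frame, which stays stable no matter which skeleton numbers the Kinect hands out.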

Because each player's pawn has a blueprint that reads a specific index (0 for player 1, 1 for player 2), the two players needed different blueprints, and this is what caused our second biggest problem. By default, Unreal Engine 4 only allows a single pawn to be the default character that players spawn as. While changing the number of players allowed in the scene and setting up the split screen turned out to be very simple, letting the two players control their individually blueprinted characters is where we met difficulty. To override this we would have needed C++ to code a second player ID, and though we tried, our joint inexperience with C++ rendered our attempts futile. We did end up with a working blueprint for a single shooter, drawing heavily on the base first-person shooter blueprint.

As a last-second change, I wanted to create a demo game by simply adding an ammo system and having enemies run at you, but it turned out a bit incomplete because I couldn't figure out which type of collision the bullet-versus-enemy hit was registering as (overlap, block, etc.). The bullets ended up bouncing off the enemies instead of killing them, which was somewhat comical but not what I expected. Though we did not get to do as much with the character as we wanted because of these limitations, it was definitely a good experience: I learned a lot about how Unreal Engine 4 operates, and it has given me many ideas for the possible future of this blueprint and for future projects that may involve this engine.

【Kinetic Interface】Kinect for UE4 TPS Blueprint (Kevin Xu and Jiayi Wang)

Final Preview

Built with UE4 4.17.2, upward compatible through 4.18. The Kinect plugin is required for the project and can be found here. As of May 2018, the plugin is not compatible with 4.19 or any newer version.

Project download link: https://drive.google.com/file/d/19AIs7NBNx7OkzGriS_T02x3lrI_zw4wf/view?usp=sharing

The final version of the blueprint is named 3rdpersonKinect in the root directory.

The final product is a drag-and-drop-ready character controller for any developer to use. The pawn comes with locomotion, meaning the mesh actually follows the user around with matching animations. The blueprint is constructed with a following camera and a gun that points in the direction of the player's right hand.

The development of this project started with the “Kinect poseable mesh” BP that comes with the Kinect plugin. This blueprint consists of a mesh that follows the relative movement of the rig, meaning the mesh does not move around the level but stays at one point and performs all the movement in place. Apart from this, since the original idea of the project was to make a multiplayer game based on the Kinect, it was necessary to assign the correct body number (the body Enum variable in the blueprint) to each player.

As recorded in the video above, the number is randomly generated and there is no way to control it. As a result, we assigned whichever number gets detected first to one body and the second to the other, which solved the random rig number problem.

As shown in the video, we now have two meshes that animate according to the two players, without the animations getting mixed up between them. However, there is still no locomotion.

There is a lot more calibration needed in the blueprint to improve the user experience. Since neither of us had experience with C++, it was a real struggle to construct everything using only the node editor. If we had more time, we would definitely want to add functions such as reloading, teleporting (like the VR pawn), or even combining this with VR.

【Kinetic Interface】Final Proposal (Kevin Xu and Jiayi Wang)

Presentation Slides: https://docs.google.com/presentation/d/1ai50ob6ezz_TZQRzUVHuS1AfxMMMpVi5pcNnVHDsvxo/edit?usp=sharing

For the final project, we want to make a sidescroller-style two-player fighting game with the Kinect as the controller. Unlike traditional fighting games such as Street Fighter, which have a lot of combo attacks, we want our game to stick to reality, meaning the characters will attack each other in the way the players actually move. The difficulty of this project will probably be the collision detection and dealing damage based on it.

Kinetic Interfaces Final Project: Fish Artists (Oli and Harrison)

Project: Fish Artists

Partner: Sihao Chen

Date: 13 May

Inspiration

Our project, which was inspired by our midterm project, involved using the silhouettes of live fish with Processing to create a real-time visual painting in traditional Chinese style. For our midterm project we used Leap Motion to create a pond-ecosystem simulation: the user would use certain gestures and their hand position to interact with a fish in the pond. Combined with visual animations made in Processing, the quality that stood out most was the simple aesthetic of the fish and their motion. We wanted to carry on the idea of using fish to create visuals, so we looked into other interactive media projects that used live fish as their foundation. From this research, we found a Japanese artist (at the goldfish festival in Japan) who used projection mapping and living fish to create some very cool visuals.

Initial Plan

The initial idea was to use the Kinect to get the x, y positions of the fish and then projection-map through the fish tank, so that we could use the positions and the shadows to create some really cool visuals. However, the Kinect ended up being rather problematic: it couldn't pick up the positions, and given the reflection of the tank there was also a lot of noise interference.

So instead we resorted to using the webcam and color tracking to get the silhouettes of the fish. We also used the max and min X/Y values of each fish's silhouette to compute an average position, which we then used to attach animations to the fish. However, this also proved problematic: the fish's index kept changing while the animation was applied, making it look very jolty.

Video: projecting through the tank

Another issue was the brightness of the projector shining through the fish tank. It would have been too bright for the fish, so we had to abandon the idea of projection mapping through the tank.

Given the indexing errors, we tried to think of other ways to create interesting visuals without needing to identify each fish, and we thought of adding a slow trailing effect. We realized that the blackness of the silhouettes would create a very nice contrast over a white background, and when we actually implemented the fading trail effect, the silhouettes looked very much like the brush strokes of traditional Chinese watercolor painting.
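The tracking code below doesn't include the trail itself; the usual Processing version of a fading trail is to draw a translucent background-colored rectangle over the canvas every frame instead of calling background(). A minimal sketch of the technique (the alpha value and the wandering dot are illustrative, not our tuned values):

float x, y;

void setup() {
  size(640, 360);
  background(255);
  noStroke();
  x = width / 2;
  y = height / 2;
}

void draw() {
  // translucent white wash: low alpha = long trail, high alpha = short trail
  fill(255, 10);
  rect(0, 0, width, height);

  // stand-in for a fish silhouette: a black dot wandering around
  x = constrain(x + random(-5, 5), 0, width);
  y = constrain(y + random(-5, 5), 0, height);
  fill(0);
  ellipse(x, y, 20, 20);
}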

For the IMA show we then aimed to fine-tune this aesthetic. First we looked for potential paintings that matched the style of our fish, then we used Photoshop to digitally edit the fish out of the photos. The edited image then became the background in Processing. When implemented, this worked really well: it looked almost as if the fish were moving about on the page in real time.

[Photo: IMG_2224]

The main tracking sketch:

import processing.video.*;
Capture video;
int blobCounter = 0;
int maxLife = 200;
color trackColor; 
float threshold = 40;
float distThreshold = 50;
ArrayList<Blob> blobs = new ArrayList<Blob>();

void setup() {
  size(640, 360);
  String[] cameras = Capture.list();
  printArray(cameras);
  // camera index 41 was specific to our machine; pick the right
  // entry from the list printed above
  video = new Capture(this, cameras[41]);
  video.start();
  // initial color to track (183, 12, 83); click the canvas at runtime to re-sample
  trackColor = color(183, 12, 83);
}

void captureEvent(Capture video) {
  video.read();
}

void keyPressed() {
  if (key == 'a') {
    distThreshold+=5;
  } else if (key == 'z') {
    distThreshold-=5;
  }
  if (key == 's') {
    threshold+=5;
  } else if (key == 'x') {
    threshold-=5;
  }
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);

  ArrayList<Blob> currentBlobs = new ArrayList<Blob>();

  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x++ ) {
    for (int y = 0; y < video.height; y++ ) {
      int loc = x + y * video.width;
      // What is current color
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float d = distSq(r1, g1, b1, r2, g2, b2); 

      if (d < threshold*threshold) {

        boolean found = false;
        for (Blob b : currentBlobs) {
          if (b.isNear(x, y)) {
            b.add(x, y);
            found = true;
            break;
          }
        }

        if (!found) {
          Blob b = new Blob(x, y);
          currentBlobs.add(b);
        }
      }
    }
  }

  for (int i = currentBlobs.size()-1; i >= 0; i--) {
    if (currentBlobs.get(i).size() < 500) {
      currentBlobs.remove(i);
    }
  }

  // There are no blobs!
  if (blobs.isEmpty() && currentBlobs.size() > 0) {
    println("Adding blobs!");
    for (Blob b : currentBlobs) {
      b.id = blobCounter;
      blobs.add(b);
      blobCounter++;
    }
  } else if (blobs.size() <= currentBlobs.size()) {
    // Match whatever blobs you can match
    for (Blob b : blobs) {
      float recordD = 1000;
      Blob matched = null;
      for (Blob cb : currentBlobs) {
        PVector centerB = b.getCenter();
        PVector centerCB = cb.getCenter();         
        float d = PVector.dist(centerB, centerCB);
        if (d < recordD && !cb.taken) {
          recordD = d; 
          matched = cb;
        }
      }
      matched.taken = true;
      b.become(matched);
    }

    // Whatever is leftover make new blobs
    for (Blob b : currentBlobs) {
      if (!b.taken) {
        b.id = blobCounter;
        blobs.add(b);
        blobCounter++;
      }
    }
  } else if (blobs.size() > currentBlobs.size()) {
    for (Blob b : blobs) {
      b.taken = false;
    }


    // Match whatever blobs you can match
    for (Blob cb : currentBlobs) {
      float recordD = 1000;
      Blob matched = null;
      for (Blob b : blobs) {
        PVector centerB = b.getCenter();
        PVector centerCB = cb.getCenter();         
        float d = PVector.dist(centerB, centerCB);
        if (d < recordD && !b.taken) {
          recordD = d; 
          matched = b;
        }
      }
      if (matched != null) {
        matched.taken = true;
        matched.become(cb);
      }
    }

    for (int i = blobs.size() - 1; i >= 0; i--) {
      Blob b = blobs.get(i);
      if (!b.taken) {
        blobs.remove(i);
      }
    }
  }

  for (Blob b : blobs) {
    b.show();
  } 



  textAlign(RIGHT);
  fill(0);
  //text(currentBlobs.size(), width-10, 40);
  //text(blobs.size(), width-10, 80);
  textSize(24);
  text("color threshold: " + threshold, width-10, 50);  
  text("distance threshold: " + distThreshold, width-10, 25);
}


float distSq(float x1, float y1, float x2, float y2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1);
  return d;
}


float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d;
}

void mousePressed() {
  // Save color where the mouse is clicked in trackColor variable
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
  println(red(trackColor), green(trackColor), blue(trackColor));
}
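The Blob class lives in a separate tab that didn't make it into the post. A minimal version consistent with the calls above (isNear() checking against distThreshold, become() adopting the matched blob's bounds, size() as bounding-box area) would look roughly like this; the original show() presumably drew the styled silhouette, and the unused maxLife variable in the main sketch suggests it also had a lifespan countdown, omitted here:

class Blob {
  float minx, miny, maxx, maxy;
  int id = 0;
  boolean taken = false;

  Blob(float x, float y) {
    minx = x;
    miny = y;
    maxx = x;
    maxy = y;
  }

  void show() {
    // outline the blob's bounding box and label its id
    stroke(0);
    fill(255, 100);
    strokeWeight(2);
    rectMode(CORNERS);
    rect(minx, miny, maxx, maxy);
    textAlign(CENTER);
    textSize(24);
    fill(0);
    text(id, (minx + maxx) * 0.5, maxy + 24);
  }

  // adopt the matched blob's bounds
  void become(Blob other) {
    minx = other.minx;
    miny = other.miny;
    maxx = other.maxx;
    maxy = other.maxy;
  }

  // is (x, y) within distThreshold of this blob's bounding box?
  boolean isNear(float x, float y) {
    float cx = constrain(x, minx, maxx);
    float cy = constrain(y, miny, maxy);
    return distSq(cx, cy, x, y) < distThreshold * distThreshold;
  }

  // grow the bounding box to include (x, y)
  void add(float x, float y) {
    minx = min(minx, x);
    miny = min(miny, y);
    maxx = max(maxx, x);
    maxy = max(maxy, y);
  }

  // blob size measured as bounding-box area
  float size() {
    return (maxx - minx) * (maxy - miny);
  }

  PVector getCenter() {
    return new PVector((minx + maxx) * 0.5, (miny + maxy) * 0.5);
  }
}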