Talking Fabrics: Touch Project (Phyllis)

Title: Sleepy Rabbit

Target: 3-5-year-old Children

Description: This touch project is designed to simulate the process of a mom taking care of a baby (especially calming a child down before bed). I hope that by playing with this sleepy rabbit, children may learn to take care of others, just as their moms take care of them every day.

Documented by: Phyllis

Documented on: May 15th, 2018

Materials: RFID tag*2, RFID reader*2, Arduino*2, wires, toy rabbit, thread, insulating tape, Bluetooth speaker



Basically, my design for this project is to have the toy first “cry” and then sweetly “fall asleep.” It’s quite common for babies to cry when they want to go to bed. Crying babies need more care, and moms often wipe away their tears. Therefore, I wanted an interaction where children wipe away the toy’s tears, as if they were comforting it. When children put a sheet over the toy, it falls asleep, just as moms calm their babies down at night. After talking with Antonius about my idea, he suggested using RFID to achieve my goals in this project.
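The interaction boils down to a tiny state machine. Here is a sketch of the idea in plain JavaScript (the tag IDs are the ones my Processing sketch checks for; the state names are just for illustration):

```javascript
// The rabbit starts out crying. Wiping its tears with the "napkin" tag calms
// it down, and covering it with the "sheet" tag puts it to sleep (snoring).
// The state names here are illustrative, not from the actual sketch.
var NAPKIN_TAG = "18002700C9F6"; // tag sewn into the napkin
var SHEET_TAG  = "0300B4A34155"; // tag sewn into the sheet

function nextState(state, tag) {
  if (tag === NAPKIN_TAG) return "calm";   // stop the crying sound
  if (tag === SHEET_TAG)  return "asleep"; // start the snoring sound
  return state;                            // unknown tag: nothing changes
}

console.log(nextState("crying", NAPKIN_TAG)); // "calm"
console.log(nextState("calm", SHEET_TAG));    // "asleep"
```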

Stage 1: Circuit Building and Testing

I searched for example code for the RFID reader and tried to make two RFID readers work at the same time on one Arduino. I spent so much time figuring it out but failed… Below is the code that I tried, based on the example code that I found in the RFID documentation.

After asking Antonius about it, we tried to figure it out together but failed again… The Arduino was just printing weird numbers/letters received from the RFID readers, which we did not understand at all… and only one of them worked at a time. After several more tries, Antonius suggested having each RFID reader work on a separate Arduino… Therefore, I broke the circuit into two and went back to the original Arduino code from the RFID documentation.

After testing directly with the sample code, I tried to build serial communication between Arduino and Processing. However, I realized that the RFID reader actually works quite differently from what I had imagined. It turned out that I couldn’t successfully pass what the reader received on to Processing (I asked Nick for help and we were stuck here for more than a whole day)…

Fortunately, Luis helped me find a Processing example on GitHub for building serial communication with the RFID reader. As shown in the testing video below, it worked!!! Then, in Processing, I added sound files that I found on Freesound (a crying sound and a snoring sound).
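What finally clicked for me is that the reader sends each tag as a 12-character ID plus some framing bytes, so the sketch just has to trim off the extra characters and keep well-formed IDs. A rough sketch of that idea in plain JavaScript (the exact tag format is an assumption based on the IDs my reader produced):

```javascript
// Trim a raw serial read and keep it only if it looks like a 12-character
// hexadecimal tag ID; anything else (noise, partial reads) is rejected.
function parseTag(raw) {
  if (raw == null) return null;
  var trimmed = raw.trim();
  return /^[0-9A-F]{12}$/.test(trimmed) ? trimmed : null;
}

console.log(parseTag("18002700C9F6\r\n")); // "18002700C9F6"
console.log(parseTag("garbage"));          // null
```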

After making sure that my code was working fine, I tried to make the whole circuit more solid. I used insulating tape to stabilize the RFID readers and twined some thread around the Arduinos.


Stage 2: Moving the Circuit into the Toy

Since I wanted to hide the circuit inside the toy rabbit, I made a big “hole” in both the back and the head of the toy and took out about 40% of the foam.


Then I hid the circuits inside the toy. I placed one RFID reader in the back and one in the head (the Arduino’s USB cable runs all the way through the toy’s tummy), so that when the toy is facing children, they can interact with it at its right eye and its tummy. I had to be really careful because the space was so limited and I was also afraid of pulling my circuit apart.


I finally succeeded!! Both USB cables come out from the back of the toy, where they can be hidden under the rabbit’s shirt.

Below is a rough testing video after I finished 90% of the final fabrication.

Stage 3: Final Decoration

In order to make the whole project more connected, I sewed the smaller RFID tag into a piece of white fabric so that it looks like a napkin.

Then I sewed the other, card-shaped RFID tag into another piece of fabric to make it look like a sheet. The left photo shows how the “sheet” looks in the front, and the right photo shows how it looks in the back.

In terms of sewing… I actually sewed a pocket for the RFID card. The photo below shows how the pocket looks from the side (though honestly, it’s not very intuitive).

I also sewed up the head and the back after placing the circuits inside the toy (as you may see in the photo below).

This is how my sleepy rabbit looks after all the fabrication and decoration.


I feel that sound choice is really important and, at the same time, hard to get right. Although I spent plenty of time searching for the most appropriate crying audio, I still feel that the crying sound is a bit too creepy for children to play with (I guess…). The current crying audio is actually a cartoonish one, which I consider the best of the ones I compared… However, if a sound file is not good enough, it can change the whole initial concept of the project.

import processing.serial.*;
import processing.sound.*;

Serial myPort;
String inBuffer = " ";

SoundFile soundfile1; // crying sound
SoundFile soundfile2; // snoring sound

void setup() {
  size(512, 512);

  //myPort = new Serial(this, Serial.list()[3], 9600);
  myPort = new Serial(this, Serial.list()[4], 9600);

  soundfile1 = new SoundFile(this, "cry_cartoon.wav");
  soundfile2 = new SoundFile(this, "snoring_baby.wav");
}

void draw() {
  if (!inBuffer.equals(" ")) {
    drawTag(inBuffer);
  }
}

void serialEvent(Serial myPort) {
  // each tag is a 12-character ID plus framing characters
  while (myPort.available() > 13) {
    inBuffer = myPort.readString();
    if (inBuffer != null) {
      inBuffer = inBuffer.trim();
    }
  }
}

void drawTag(String tagVal) {
  if (tagVal.equals("18002700C9F6")) {
    fill(255, 0, 0);
    ellipse(width/2, height/2, 100, 100);
    // stop crying with the blue snap
    soundfile1.stop();
  }
  if (tagVal.equals("0300B4A34155")) {
    fill(255, 255, 0);
    ellipse(width/2, height/2, 100, 100);
    // start snoring
    soundfile2.play();
  }
  inBuffer = " ";
}

Kinetic Interfaces: Final Project — Skeleton (Phyllis)

Date: May 12th, 2018

Title: Skeleton

Description: In our final project, we aim to explore people’s reactions to and interactions with themselves — their appearance as skeletons. We provide users not only with a playful interaction that could never happen in real life (seeing yourself being taken apart by yourself) but also with a chance to immerse themselves in accepting themselves as skeletons, making them feel that “something that should be dead can actually stay alive.”



We found the skeleton image below on Google. All credit goes to VectorStock.

Then we edited the image in Photoshop to get an image for each individual body part: head, neck, upper body, lower body, upper arms, forearms, hands, thighs, shanks, and feet.

Using the joint data received from the Kinect, we placed each individual image at its corresponding position.
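The placement logic for each part can be sketched like this in plain JavaScript: a body part is bounded by two joints, so the image is anchored at the first joint, rotated to the bone’s direction, and scaled to the bone’s length (the joint values below are made up):

```javascript
// Given two Kinect joints, work out where a body-part image should go:
// anchor at the parent joint, rotate to the bone's direction, scale to its length.
function placeBodyPart(parentJoint, childJoint, imgHeight) {
  const dx = childJoint.x - parentJoint.x;
  const dy = childJoint.y - parentJoint.y;
  const length = Math.sqrt(dx * dx + dy * dy); // bone length in pixels
  return {
    x: parentJoint.x,            // draw origin
    y: parentJoint.y,
    angle: Math.atan2(dy, dx),   // rotation of the image
    scale: length / imgHeight    // stretch the image to the bone length
  };
}

// e.g. a forearm: elbow at (100, 100), wrist at (100, 200), image 50 px tall
const forearm = placeBodyPart({ x: 100, y: 100 }, { x: 100, y: 200 }, 50);
console.log(forearm.scale); // 2: the 50 px image is stretched to the 100 px bone
```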

We first placed each body part image so that you can see a moving skeleton on the screen rather than your own body image. Below is a testing video of our first step.

Initially, we didn’t think of storing each image together with all its parameters in one object. However, when we tried adding more parameters to each image to create the effect of taking your body apart, we ran into so many errors. For instance, the parts were shaking randomly and uncontrollably on the screen. Thanks to Moon’s help, we solved this by creating 16 objects in total, each with its own scale, rotation, velocity, etc. As you may see in the testing video below, everything seems to be under control.


Use Minim for sound loading!! We used the Processing SoundFile first; however, we found that the background music sounded really weird: it played at a faster rate and couldn’t loop. Thanks to Billy’s help, we used Minim to load our bgm, and it turned out to work very well.


import processing.sound.*;
SoundFile soundfile;

import oscP5.*;
import netP5.*;

PImage stagelight;

BodyPart[] bones = new BodyPart[16];
int skeletonAge = 0;

PImage foot1, foot2, shank1, shank2, thigh1, thigh2;
PImage forearm1, forearm2, bigarm1, bigarm2, hand1, hand2;
PImage bodyUpper, bodyDown, head, neck;

PVector headR;
PVector neckR;
PVector shoulder1R, shoulder2R;
PVector elbow1R, elbow2R;
PVector wrist1R, wrist2R;
PVector hand1R, hand2R;
PVector spineBase, spineMid, spineShoulder;
PVector hip1R, hip2R;
PVector knee1R, knee2R;
PVector ankle1R, ankle2R;
PVector foot1R, foot2R;

float Attractionamount = 0;
int floorLevel = 300;

void setup() {
  size(1920, 1080, P3D);
  background(0, 0, 0);

  setupKinect();
  loadImg();

  soundfile = new SoundFile(this, "bgm.mp3");
  soundfile.loop();


  bones[0] = new BodyPart("head", head);
  bones[1] = new BodyPart("bodyUpper", bodyUpper);
  bones[2] = new BodyPart("hand1", hand1);
  bones[3] = new BodyPart("hand2", hand2);
  bones[4] = new BodyPart("foot1", foot1);
  bones[5] = new BodyPart("foot2", foot2);
  bones[6] = new BodyPart("bigarm1", bigarm1);
  bones[7] = new BodyPart("bigarm2", bigarm2);
  bones[8] = new BodyPart("forearm1", forearm1);
  bones[9] = new BodyPart("forearm2", forearm2);
  bones[10] = new BodyPart("thigh1", thigh1);
  bones[11] = new BodyPart("thigh2", thigh2);
  bones[12] = new BodyPart("shank1", shank1);
  bones[13] = new BodyPart("shank2", shank2);
  bones[14] = new BodyPart("neck", neck);
  bones[15] = new BodyPart("bodyDown", bodyDown);
  int index = int( random(bones.length) );
  bones[index].isDetached = false;
}

void draw() {
  rect(0, height-floorLevel, width, floorLevel);

  // updateKinect
  ArrayList<KSkeleton> skeletonArray =  kinect.getSkeletonColorMap();
  //individual JOINTS
  for (int i = 0; i < skeletonArray.size(); i++) {
    KSkeleton skeleton = (KSkeleton) skeletonArray.get(i);
    if (skeleton.isTracked()) {
      joints = skeleton.getJoints();
      updateVectorsFromSkeleton();

      color col = skeleton.getIndexColor();
      drawBody( joints );
    }
  }

  // get the body parts
  if (skeletonArray.size() == 0) {
    skeletonAge = 0;
    for (int i=0; i<bones.length; i++) {
      bones[i].isDetached = false;
      bones[i].vel = new PVector();
    }
  } else {

    PVector vector, pos;

    // ***** head
    vector = PVector.sub(headR, neckR);
    pos = new PVector(neckR.x, neckR.y);
    bones[0].updateFromSkeleton( vector, pos );

    //// ***** bodyUpper
    vector = PVector.sub(spineMid, spineShoulder);
    pos = new PVector(spineShoulder.x, spineShoulder.y);
    bones[1].updateFromSkeleton( vector, pos );

    // ***** hand Left
    vector = PVector.sub(hand1R, wrist1R);
    pos = new PVector(wrist1R.x, wrist1R.y);
    bones[2].updateFromSkeleton( vector, pos );

    // ***** hand Right
    vector = PVector.sub(hand2R, wrist2R);
    pos = new PVector(wrist2R.x, wrist2R.y);
    bones[3].updateFromSkeleton( vector, pos );

    // ***** foot Left
    vector = PVector.sub(foot1R, ankle1R);
    pos = new PVector(ankle1R.x, ankle1R.y);
    bones[4].updateFromSkeleton( vector, pos );

    // ***** foot Right
    vector = PVector.sub(foot2R, ankle2R);
    pos = new PVector(ankle2R.x, ankle2R.y);
    bones[5].updateFromSkeleton( vector, pos );

    // ***** bigarm Left
    vector = PVector.sub(elbow1R, shoulder1R);
    pos = new PVector(shoulder1R.x, shoulder1R.y);
    bones[6].updateFromSkeleton( vector, pos );

    // ***** bigarm Right
    vector = PVector.sub(elbow2R, shoulder2R);
    pos = new PVector(shoulder2R.x, shoulder2R.y);
    bones[7].updateFromSkeleton( vector, pos );

    // ***** forearm Left
    vector = PVector.sub(wrist1R, elbow1R);
    pos = new PVector(elbow1R.x, elbow1R.y);
    bones[8].updateFromSkeleton( vector, pos );

    // ***** forearm Right
    vector = PVector.sub(wrist2R, elbow2R);
    pos = new PVector(elbow2R.x, elbow2R.y);
    bones[9].updateFromSkeleton( vector, pos );

    // ***** thigh Left
    vector = new PVector(knee1R.x - hip1R.x, knee1R.y - hip1R.y);
    pos = new PVector(hip1R.x, hip1R.y);
    bones[10].updateFromSkeleton( vector, pos );

    // ***** thigh Right
    vector = new PVector(knee2R.x - hip2R.x, knee2R.y - hip2R.y);
    pos = new PVector(hip2R.x, hip2R.y);
    bones[11].updateFromSkeleton( vector, pos );

    // ***** shank Left
    vector = PVector.sub(ankle1R, knee1R);
    pos = new PVector(knee1R.x, knee1R.y);
    bones[12].updateFromSkeleton( vector, pos );

    // ***** shank Right
    vector = PVector.sub(ankle2R, knee2R);
    pos = new PVector(knee2R.x, knee2R.y);
    bones[13].updateFromSkeleton( vector, pos );

    // ****** Neck
    vector = PVector.sub(spineShoulder, neckR);
    pos = new PVector(neckR.x, neckR.y);
    bones[14].updateFromSkeleton( vector, pos );

    // *** BodyDown
    vector = PVector.sub(spineBase, spineMid);
    pos = new PVector(spineMid.x, spineMid.y);
    bones[15].updateFromSkeleton( vector, pos );

    // display
    for (int i=0; i<bones.length; i++) {
      BodyPart b = bones[i];

      PVector gravity = new PVector(0, 0.45);
      b.applyForce( gravity );

      b.updatePhysics();
      b.checkBoundary();
      b.display();
    }

    if (skeletonAge++ > 1600) {
      skeletonAge = 0;
    }
  }

  text( skeletonAge, 10, 20 );
}

void loadImg() {
  foot1 = loadImage("foot1.png");
  foot2 = loadImage("foot2.png");
  shank1 = loadImage("shank1.png");
  shank2 = loadImage("shank2.png");
  thigh1 = loadImage("thigh1.png");
  thigh2 = loadImage("thigh2.png");
  forearm1 = loadImage("forearm1.png");
  forearm2 = loadImage("forearm2.png");
  bigarm1 = loadImage("bigarm1.png");
  bigarm2 = loadImage("bigarm2.png");
  hand1 = loadImage("hand1.png");
  hand2 = loadImage("hand2.png");
  bodyUpper = loadImage("bodyUpper.png");
  head = loadImage("head.png");
  neck = loadImage("neck.png");
  bodyDown = loadImage("bodyDown.png");

  stagelight = loadImage("light.png");
}

void updateVectorsFromSkeleton() {
  headR = new PVector ( joints[KinectPV2.JointType_Head].getX(), joints[KinectPV2.JointType_Head].getY() );
  neckR = new PVector ( joints[KinectPV2.JointType_Neck].getX(), joints[KinectPV2.JointType_Neck].getY());
  shoulder1R = new PVector ( joints[KinectPV2.JointType_ShoulderLeft].getX(), joints[KinectPV2.JointType_ShoulderLeft].getY() );
  shoulder2R = new PVector ( joints[KinectPV2.JointType_ShoulderRight].getX(), joints[KinectPV2.JointType_ShoulderRight].getY() );
  elbow1R = new PVector ( joints[KinectPV2.JointType_ElbowLeft].getX(), joints[KinectPV2.JointType_ElbowLeft].getY() );
  elbow2R = new PVector ( joints[KinectPV2.JointType_ElbowRight].getX(), joints[KinectPV2.JointType_ElbowRight].getY() );
  wrist1R = new PVector ( joints[KinectPV2.JointType_WristLeft].getX(), joints[KinectPV2.JointType_WristLeft].getY() );
  wrist2R = new PVector ( joints[KinectPV2.JointType_WristRight].getX(), joints[KinectPV2.JointType_WristRight].getY() );
  hand1R = new PVector ( joints[KinectPV2.JointType_HandLeft].getX(), joints[KinectPV2.JointType_HandLeft].getY() );
  hand2R = new PVector ( joints[KinectPV2.JointType_HandRight].getX(), joints[KinectPV2.JointType_HandRight].getY() );
  spineBase = new PVector ( joints[KinectPV2.JointType_SpineBase].getX(), joints[KinectPV2.JointType_SpineBase].getY() );
  spineMid = new PVector  ( joints[KinectPV2.JointType_SpineMid].getX(), joints[KinectPV2.JointType_SpineMid].getY() );
  spineShoulder = new PVector  ( joints[KinectPV2.JointType_SpineShoulder].getX(), joints[KinectPV2.JointType_SpineShoulder].getY() );
  hip1R = new PVector ( joints[KinectPV2.JointType_HipLeft].getX(), joints[KinectPV2.JointType_HipLeft].getY() );
  hip2R = new PVector ( joints[KinectPV2.JointType_HipRight].getX(), joints[KinectPV2.JointType_HipRight].getY() );
  knee1R = new PVector ( joints[KinectPV2.JointType_KneeLeft].getX(), joints[KinectPV2.JointType_KneeLeft].getY() );
  knee2R = new PVector ( joints[KinectPV2.JointType_KneeRight].getX(), joints[KinectPV2.JointType_KneeRight].getY() );
  ankle1R =  new PVector ( joints[KinectPV2.JointType_AnkleLeft].getX(), joints[KinectPV2.JointType_AnkleLeft].getY() );
  ankle2R = new PVector ( joints[KinectPV2.JointType_AnkleRight].getX(), joints[KinectPV2.JointType_AnkleRight].getY() );
  foot1R = new PVector ( joints[KinectPV2.JointType_FootLeft].getX(), joints[KinectPV2.JointType_FootLeft].getY() );
  foot2R = new PVector ( joints[KinectPV2.JointType_FootRight].getX(), joints[KinectPV2.JointType_FootRight].getY() );
}

class BodyPart {
  String name;
  PImage img;
  PVector pos;
  PVector vel;
  PVector acc;
  float angle;
  float scale;
  float distance;
  boolean isDetached;
  PVector prePos;

  BodyPart(String _name, PImage _img) {
    name = _name;
    img = _img;
    pos = new PVector();
    prePos = new PVector();
    vel = new PVector();
    acc = new PVector();
    angle = 0;
    scale = 1.0;
    distance = 0;
    isDetached = false;
  }

  void checkAcceleration() {
    PVector vector = PVector.sub(pos, prePos);
    if (vector.mag() > 70) {
      println(name, vector.mag());
      isDetached = true;

      applyForce( vector );

      fill(255, 0, 0, 100);
      ellipse(pos.x, pos.y, 100, 100);
    }
  }

  void findBones(PVector skeletonVector) {
    if (isDetached) {
      PVector vector = skeletonVector.copy().sub(pos);
      float distance = vector.mag();
      if (distance <= 20) {
        isDetached = false; // close enough: snap back onto the skeleton
      }
      line(skeletonVector.x, skeletonVector.y, pos.x, pos.y);
    }
  }

  void getPreviousPos() {
    prePos.x = pos.x;
    prePos.y = pos.y;
  }

  void updatePhysics() {
    if ( isDetached ) {
      vel.add( acc );
      pos.add( vel );
      acc.mult( 0 );
    }
  }

  void applyForce( PVector force ) {
    if ( isDetached ) {
      PVector f = force.copy();
      acc.add( f );
    }
  }

  void updateFromSkeleton( PVector vector, PVector _pos ) {
    if ( !isDetached ) {
      pos = _pos.copy();
      PVector v = vector.copy();
      distance = v.mag();
      angle = v.heading();
    }
  }

  void display() {
    float adjustX = 0;
    float adjustY = 0;
    float adjustW = 1.0;
    float adjustH = 1.0;

    if ( name.equals("head") ) {
      adjustW = 1.5;
      adjustH = 1.5;
    } else if ( name.equals("bodyDown") ) {
      adjustY = -20;
      adjustH = 1.2;
    } else if ( name.equals("hand1") || name.equals("hand2") || name.equals("foot1") || name.equals("foot2") ) {
      adjustW = 2.5;
      adjustH = 2.5;
    } else if ( name.equals("bodyUpper") ) {
      adjustY = 0;
      adjustH = 1.2;
      adjustW = 1.5;
    }

    scale = distance / img.height;

    pushMatrix();
    translate(pos.x, pos.y);
    rotate(angle - PI/2);
    image(img, -0.5*img.width * adjustW * scale + adjustX * scale, adjustY * scale,
      img.width * scale * adjustW, img.height * scale * adjustH );

    //stroke(0, 255, 0);
    //line(0, 0, distance, 0);
    popMatrix();
  }

  void applyRestitution(float amount) {
    float value = 1.0 + amount;
    vel.mult( value );
  }

  void checkBoundary() {
    if (pos.x < 0) {
      pos.x = 0;
      vel.x *= -1;
    } else if (pos.x > width) {
      pos.x = width;
      vel.x *= -1;
    }
    if (pos.y < 0) {
      pos.y = 0;
      vel.y *= -1;
    } else if (pos.y > height - floorLevel) {
      pos.y = height - floorLevel;
      vel.y *= -1;
    }
  }
}

import KinectPV2.KJoint;
import KinectPV2.*;

KinectPV2 kinect;
KJoint[] joints;

void setupKinect() {
  kinect = new KinectPV2(this);
  kinect.enableSkeletonColorMap(true);
  kinect.init();
}


void drawBody(KJoint[] joints) {
  drawBone(joints, KinectPV2.JointType_Head, KinectPV2.JointType_Neck);
  drawBone(joints, KinectPV2.JointType_Neck, KinectPV2.JointType_SpineShoulder);
  drawBone(joints, KinectPV2.JointType_SpineShoulder, KinectPV2.JointType_SpineMid);
  drawBone(joints, KinectPV2.JointType_SpineMid, KinectPV2.JointType_SpineBase);
  drawBone(joints, KinectPV2.JointType_SpineShoulder, KinectPV2.JointType_ShoulderRight);
  drawBone(joints, KinectPV2.JointType_SpineShoulder, KinectPV2.JointType_ShoulderLeft);
  drawBone(joints, KinectPV2.JointType_SpineBase, KinectPV2.JointType_HipRight);
  drawBone(joints, KinectPV2.JointType_SpineBase, KinectPV2.JointType_HipLeft);

  // Right Arm
  drawBone(joints, KinectPV2.JointType_ShoulderRight, KinectPV2.JointType_ElbowRight);
  drawBone(joints, KinectPV2.JointType_ElbowRight, KinectPV2.JointType_WristRight);
  drawBone(joints, KinectPV2.JointType_WristRight, KinectPV2.JointType_HandRight);
  drawBone(joints, KinectPV2.JointType_HandRight, KinectPV2.JointType_HandTipRight);
  drawBone(joints, KinectPV2.JointType_WristRight, KinectPV2.JointType_ThumbRight);

  // Left Arm
  drawBone(joints, KinectPV2.JointType_ShoulderLeft, KinectPV2.JointType_ElbowLeft);
  drawBone(joints, KinectPV2.JointType_ElbowLeft, KinectPV2.JointType_WristLeft);
  drawBone(joints, KinectPV2.JointType_WristLeft, KinectPV2.JointType_HandLeft);
  drawBone(joints, KinectPV2.JointType_HandLeft, KinectPV2.JointType_HandTipLeft);
  drawBone(joints, KinectPV2.JointType_WristLeft, KinectPV2.JointType_ThumbLeft);

  // Right Leg
  drawBone(joints, KinectPV2.JointType_HipRight, KinectPV2.JointType_KneeRight);
  drawBone(joints, KinectPV2.JointType_KneeRight, KinectPV2.JointType_AnkleRight);
  drawBone(joints, KinectPV2.JointType_AnkleRight, KinectPV2.JointType_FootRight);

  // Left Leg
  drawBone(joints, KinectPV2.JointType_HipLeft, KinectPV2.JointType_KneeLeft);
  drawBone(joints, KinectPV2.JointType_KneeLeft, KinectPV2.JointType_AnkleLeft);
  drawBone(joints, KinectPV2.JointType_AnkleLeft, KinectPV2.JointType_FootLeft);

  drawJoint(joints, KinectPV2.JointType_HandTipLeft);
  drawJoint(joints, KinectPV2.JointType_HandTipRight);
  drawJoint(joints, KinectPV2.JointType_FootLeft);
  drawJoint(joints, KinectPV2.JointType_FootRight);

  drawJoint(joints, KinectPV2.JointType_ThumbLeft);
  drawJoint(joints, KinectPV2.JointType_ThumbRight);

  drawJoint(joints, KinectPV2.JointType_Head);
}

//draw joint
void drawJoint(KJoint[] joints, int jointType) {
  pushMatrix();
  translate(joints[jointType].getX(), joints[jointType].getY(), joints[jointType].getZ());
  ellipse(0, 0, 25, 25);
  popMatrix();
}

//draw bone
void drawBone(KJoint[] joints, int jointType1, int jointType2) {
  pushMatrix();
  translate(joints[jointType1].getX(), joints[jointType1].getY(), joints[jointType1].getZ());
  ellipse(0, 0, 25, 25);
  popMatrix();
  line(joints[jointType1].getX(), joints[jointType1].getY(), joints[jointType1].getZ(), joints[jointType2].getX(), joints[jointType2].getY(), joints[jointType2].getZ());
}

//draw hand state
void drawHandState(KJoint joint) {
  pushMatrix();
  translate(joint.getX(), joint.getY(), joint.getZ());
  ellipse(0, 0, 70, 70);
  popMatrix();
}

// different hand states
//void handState(int handState) {
//  switch(handState) {
//  case KinectPV2.HandState_Open:
//    fill(0, 255, 0);
//    break;
//  case KinectPV2.HandState_Closed:
//    fill(255, 0, 0);
//    break;
//  case KinectPV2.HandState_Lasso:
//    fill(0, 0, 255);
//    break;
//  case KinectPV2.HandState_NotTracked:
//    fill(255, 255, 255);
//    break;
//  }
//}

“?” — Net Art Project (Moon)

Title: “?”

Collaborators: Phyllis, Jack

Inspiration: While searching for “net art” related websites on Google, I found a super cool one called Colossal, which contains the gif below that led me to think about what to make for this project. The more time I spent staring at the movement of the particles in the gif, the more I loved it, and the stronger my sense of “chaos” and “order,” “attraction” and “repulsion” became. Then I started to think about what I could build on top of those keywords in order to guide users to think more rather than just “surf” a website.


"use strict";

var particles = [];

var maxInterval = 500;
var interval = maxInterval;

var isInteraction = false;

function setup() {
  createCanvas(windowWidth, windowHeight);

  for (var i=0; i<800; i++) {
    particles.push( new Particle( random(-windowWidth, windowWidth), random(-windowHeight, windowHeight) ) );
  }

  mouseX = width/2;
  mouseY = height/2;
  //setTimeout(link, 20000);
}

function draw() {
  stroke(255, 50);

  translate(width/2, height/2);

  for (var i=0; i<particles.length; i++) {
    particles[i].attractedTo( createVector(mouseX-width/2, mouseY-height/2) );
    particles[i].update();
    particles[i].checkEdges();
    particles[i].display();
  }

  // fill(255);
  // text(particles.length, 10, 20);

  // interval things
  // text(interval, 10, 40);
  if (isInteraction) {
    updateInterval();
  }
  if (interval == 0) {
    link();
  }
}

function mouseMoved() {
  isInteraction = true;
}

function updateInterval() {
  if (interval > 0) {
    interval--;
  } else {
    interval = maxInterval; // reset the value again
  }
}
function link() {
  window.open('../transitions/infinite.html', "_self");
}


function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}

class Particle {
  constructor( x, y ) {
    this.pos = createVector(x, y);
    this.vel = createVector();
    this.acc = createVector();
    this.mass = random(1, 10);
    this.rad = 1 * this.mass;

    this.angle = createVector();
    this.rotSpeed = createVector( random(-0.1, 0.1), random(-0.1, 0.1) );
  }

  update() {
    // position
    this.vel.add( this.acc );
    this.pos.add( this.vel );
    this.acc.mult( 0 );

    // angle
    this.angle.add( this.rotSpeed );
  }

  applyForce( force ) {
    var f = force.copy();
    f.div( this.mass );
    this.acc.add( f );
  }

  attractedTo( target ) {
    var vector = p5.Vector.sub(target, this.pos);
    var distance = vector.mag();

    if (distance > 430) {
      // pull the particles toward the center
      vector.setMag(0.5);
      this.applyForce(vector);
    } else {
      // push back
      vector.setMag(-0.5);
      this.applyForce(vector);
    }
  }

  repel( target ) {
    var vector = p5.Vector.sub(target, this.pos);
    var distance = vector.mag();
    if (distance < 10) {
      vector.setMag(-1);
      this.applyForce(vector);
    }
  }

  applyRestitution( amount ) {
    var value = 1.0 + amount;
    this.vel.mult( value );
  }

  checkEdges() {
    if (this.pos.x < -windowWidth) {
      this.pos.x = -windowWidth;
      this.vel.x *= -1;
    } else if (this.pos.x > windowWidth) {
      this.pos.x = windowWidth;
      this.vel.x *= -1;
    }

    if (this.pos.y < -windowHeight) {
      this.pos.y = -windowHeight;
      this.vel.y *= -1;
    } else if (this.pos.y > windowHeight) {
      this.pos.y = windowHeight;
      this.vel.y *= -1;
    }
  }

  display() {
    push();
    translate(this.pos.x, this.pos.y);
    ellipse( 0, 0, random(1, 5), random(1, 5) );
    pop();
  }
}


Kinetic Interfaces: Final Project Concept (Phyllis)

Humans are so used to what we look like as “humans” — we have muscles, fat, skin, hair, etc., layers and layers wrapping our bones. I believe this appearance makes us feel alive, full of warmth and endless energy. But what if there is a contradiction between what we expect ourselves to look like and the appearance we actually see in the mirror? What if that appearance somehow makes you feel that it is you while it is not really you, that you are alive while you are not really alive?

We consider the appearance of a skeleton the best choice to create such feelings. How would you feel if you saw yourself appear as a skeleton when you were in fact expecting to see your actual human body? How would you feel seeing your bones drop to the ground when you move that part of your body really fast?

In our final project, we aim to explore people’s reactions to and interactions with themselves — their appearance as skeletons. We provide users not only with a playful interaction that could never happen in real life (seeing yourself being taken apart by yourself) but also with a chance to immerse themselves in accepting themselves as skeletons, making them feel that “something that should be dead can actually stay alive.”

Talking Fabrics: Group Project

Title: Guide Shirt

Collaborators: Diego, Dania, Leah, Phyllis

Documented by: Phyllis

Materials: LDR*2, LilyPad*1, motor*2, 3.7 V battery*1, wires, T-shirt*1.

Inspiration: “Trust games” have always been popular among friends, so our group agreed on Diego’s idea of designing a new form of trust game on fabrics (which in our case is a shirt).

How It Works

Our project requires cooperation between two people — A and B. A wears the shirt while B holds a torch and acts as A’s “guide.” By pointing the torch at the left (right) LDR sensor, B makes the motor on the left (right) vibrate, meaning A should turn left (right). When both motors are vibrating, A should go straight. All components are built onto the shirt (except the torch, of course).
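The guiding logic can be sketched in a few lines of plain JavaScript (the threshold value is illustrative, and the “stop” case for no light is my assumption; the actual Arduino sketch is further below):

```javascript
// Decide which way the wearer should go from the two LDR readings.
// A reading above the threshold means the torch is pointed at that sensor.
// The threshold of 800 is illustrative; the real value depends on the LDRs and lighting.
function guideDirection(leftReading, rightReading, threshold = 800) {
  const left = leftReading > threshold;
  const right = rightReading > threshold;
  if (left && right) return "straight"; // both motors vibrate
  if (left) return "left";              // only the left motor vibrates
  if (right) return "right";            // only the right motor vibrates
  return "stop";                        // no light detected, no vibration (assumed)
}

console.log(guideDirection(900, 200)); // "left"
console.log(guideDirection(900, 950)); // "straight"
```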


To start, we drew our circuit on paper to get a clearer idea before building the actual one. Since the components are duplicated in our project, as you may see in the picture below, we only drew half of the entire circuit.

Then we moved the whole circuit onto the Arduino.

To test if this circuit was working, we connected the motors to the Arduino in the simplest way that is shown below.

As you may see in the video below, it was working well, so we decided to move the whole circuit to the LilyPad.

Similar to what we had done when building the circuit on the Arduino, Dania and I took turns trying to figure out the circuit on the LilyPad. We drew it on paper more than three times, and below is what we considered the clearest version.

Then Dania and I took turns sewing the whole circuit onto a piece of fabric. We used conductive thread to connect the LilyPad/the resistors with the wires (as shown in the pictures below). Since there are many branches in our circuit, in order to avoid short circuits, we used stiff wires as the “trunks” of each branch and made hooks on both ends so that it would be easier to sew the branches on with conductive thread. Since conductive thread is hard to keep in shape, I also twisted the softer wires and twined them onto the stiffer ones to make better connections.

Twisting the wires and making hooks on both sides.

After moving the whole circuit onto the LilyPad (as shown below in the left picture), I sewed the wires onto the fabric with normal thread to make the whole circuit more solid (as shown below in the right picture) and moved the fabric onto our shirt.


Initially, we planned to place one motor on each sleeve. However, I realized that the upper parts of the two sleeves are actually in the air when we wear T-shirts, which means the sleeves cannot directly touch our skin. Therefore, to improve the functionality of our project, we agreed to move the motors to the back, above the LDRs. Diego and Leah finished sewing the motors onto the shirt. To make sure the circuit still worked, Dania and I took a testing video before moving on.

It worked well. Dania and I then started placing the LDRs.


Since we changed the placement of the LDRs, I cut off the extra length and placed the LDRs on the shirt under the motors, first using tape to stabilize them.

After finding the most appropriate position, I used small scissors (the kind used for cutting thread) to make a hole on each side of the shirt so that the LDRs would show through and detect light efficiently after flipping the shirt inside out.

Below on the left is how our shirt looks inside, and on the right is how it looks outside.

Reflection and Further Improvement

We did test everything after we finished, and it worked. Unfortunately, we forgot to take a final working video, which was careless of us… By the time we presented to our class, one of the motors was off and one of the LDRs was broken.

For further improvement, first, we would definitely remember to take testing videos throughout our process so that we have a backup to show our peers. We did test throughout the process; however, recording those tests on video is just as important!!

Second, to make the project wireless, we would power the circuit from the 3.7 V battery so that we don’t have to connect it to the computer.

Third, in terms of the overall design, especially the circuit design, we would spend more time making it neater so that it doesn’t look so confusing and chaotic inside the shirt (but we did try our best to make it as neat as possible!).

int sensorPin1 = A1;
int sensorPin2 = A2;

int sensorValue1 = 0;
int sensorValue2 = 0;

const int motorPin1 = 6;
const int motorPin2 = 13;

void setup() {
  Serial.begin(9600);
  pinMode(motorPin1, OUTPUT);
  pinMode(motorPin2, OUTPUT);
}


void loop() {
  sensorValue1 = analogRead(sensorPin1);
  sensorValue2 = analogRead(sensorPin2);

  // print the sensor values
  Serial.print("sensorValue1: ");
  Serial.println(sensorValue1);
  Serial.print("sensorValue2: ");
  Serial.println(sensorValue2);


  // turn on motor 1
  if (sensorValue1 >= 1023) {  // 1023 is the maximum analogRead value
    digitalWrite(motorPin1, HIGH);
  } else {
    digitalWrite(motorPin1, LOW);
  }

  // turn on motor 2
  if (sensorValue2 > 800) {
    digitalWrite(motorPin2, HIGH);
  } else {
    digitalWrite(motorPin2, LOW);
  }
}

Kinetic Interfaces: Simple Sketch with OSC Communication (Phyllis)

In this assignment, ellipses with different sizes and colors are generated randomly as you move your mouse. When drawing in the “sender” sketch, you can see that the “receiver” sketch and the “sender” sketch are in sync. That is, you will find that the same patterns are generated in both.

The sketch reads your mouse position (x-axis and y-axis) and maps it into percentage values (x/width, y/height) in real time, which the “receiver” sketch uses to reproduce the drawing at its own scale.
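The normalization itself is simple. Below is a minimal sketch of the idea in Python (the actual sketches are written in Processing with OSC); the function names are my own, purely for illustration:

```python
def to_percent(x, y, width, height):
    """Normalize a pixel position to resolution-independent percentages."""
    return x / width, y / height

def to_pixels(px, py, width, height):
    """Map percentage values back to pixel coordinates in a window."""
    return px * width, py * height

# e.g. mouse at (300, 200) in a 600x400 sender window
px, py = to_percent(300, 200, 600, 400)   # (0.5, 0.5)

# the receiver (say, an 800x600 window) redraws at its own scale
x, y = to_pixels(px, py, 800, 600)        # (400.0, 300.0)
```

Because only percentages travel over OSC, the two sketches stay in sync even if their windows have different sizes.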

Below is the demo of my really simple sketch.

Week 12: Internet Art Project Proposal (Moon)

Group: Phyllis & Jack



Key points

Bigger planets or smaller stars?

Each particle represents one story? One kind of interaction?

Interaction should guide users to think more about being flexible, creative, incorporating interaction with the keyboard?

Each planet has one terrain: sand, water, fog/cloud, mountain?

Week 12: Response to Rachel Greene (Moon)

After reading Rachel Greene’s article, I find it really interesting to imagine what the Internet was like before it became what it is today. As Greene said, “the Internet allowed net.artists to work and talk independently of any bureaucracy or art-world institution without being marginalized or deprived of community.” That is to say, the internet as a platform is so different and outstanding that it enables artists to talk, discuss and work not only independently but also form a community of net art without being marginalized. It gives access and creative control to the passionate rather than the powerful. When artists have control of their own work, it greatly opens up the kind of work they can do and consequently expands what people can be exposed to.

I personally knew little about the Internet before, holding the stereotype that it was about nothing but coding, as lifeless and uncreative as programming itself. However, now I realize that the spirit behind the web can actually be “lively and gregarious.” The web can also act as a sort of mainstream medium that provides people with great enjoyment. Rather than considering “net” as a form of expression of coding, we can think of it as a communication tool or even a creation of a high-tech century. The Internet is no longer equivalent to coding; instead, coding serves the Internet. The Internet provides people with an alternative means of communication, and coding is thus brought to life.

Kinetic Interfaces: Kinect Interaction (Phyllis)

I created a simple interaction with Kinect by comparing the tracked center position of the body with the width of the sketch.

When the center position is between 1/3 width and 2/3 width, the body within that area changes into a colorful one. When part of you is outside that area, you get the depth image for that specific part beyond the boundary, while the rest of your body remains colored. Here is a demo of my assignment.

import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import gab.opencv.*;
import controlP5.*;

OpenCV opencv;
ControlP5 cp5;

Kinect2 kinect2;
PImage depthImg;
PImage colorImg;

int thresholdMin = 0;
int thresholdMax = 4499;
int depth;

float avgX = 0;
float avgY = 0;

void setup() {
  size(512, 424, P2D);

  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();

  // Blank image
  depthImg = new PImage(kinect2.depthWidth, kinect2.depthHeight, ARGB);
  colorImg = new PImage(kinect2.depthWidth, kinect2.depthHeight, ARGB);

  // Blank OpenCV Image
  opencv = new OpenCV(this, depthImg);

  // add gui
  int sliderW = 100;
  int sliderH = 20;
  cp5 = new ControlP5(this);
  cp5.addSlider("thresholdMin")
    .setPosition(10, 40)
    .setSize(sliderW, sliderH)
    .setRange(1, 4499);
  cp5.addSlider("thresholdMax")
    .setPosition(10, 70)
    .setSize(sliderW, sliderH)
    .setRange(1, 4499);
}
void draw() {

  int[] rawDepth = kinect2.getRawDepth();

  // 1. process the raw depth
  for (int i = 0; i < rawDepth.length; i++) {
    depth = rawDepth[i];

    if (depth >= thresholdMin
      && depth <= thresholdMax
      && depth != 0) {

      // depthImage for tracking
      float w = map(depth, thresholdMin, thresholdMax, 255, 100);
      depthImg.pixels[i] = color(w);

      // colorImage to just show
      if (avgX > width/3 && avgX < 2*width/3) {
        float r = map(depth, thresholdMin, thresholdMax, 255, 0);
        float b = map(depth, thresholdMin, thresholdMax, 0, 255);
        colorImg.pixels[i] = color(r, 0, b);
      } else {
        // center is outside the middle area: keep the pixel
        // transparent so the depth image shows through
        colorImg.pixels[i] = color(0, 0);
      }
    } else {
      // out of the depth range: transparent
      depthImg.pixels[i] = color(0, 0);
      colorImg.pixels[i] = color(0, 0);
    }
  }
  depthImg.updatePixels();
  colorImg.updatePixels();

  // 2. raw depth --> openCV --> improving the sensing quality
  opencv.loadImage(depthImg);
  opencv.threshold(50); // to make the image binary

  // 3. get the center position
  float sumX = 0;
  float sumY = 0;
  int count = 0;

  PImage img = opencv.getSnapshot();

  for (int y = 0; y < img.height; y++) {
    for (int x = 0; x < img.width; x++) {
      int index = x + y * img.width;
      float w = red(img.pixels[index]);

      if (w > 100) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }

  // get the center position
  if (count > 0) {
    avgX = sumX / count;
    avgY = sumY / count;
  }

  //image(img, 0, 0);
  image(kinect2.getDepthImage(), 0, 0);
  image(colorImg, 0, 0);

  // draw the center position
  stroke(0, 255, 0);
  line(avgX, 0, avgX, height);
  line(0, avgY, width, avgY);

  text(frameRate, 10, 20);
}

Week 11: Response to “Computers, Pencils, and Brushes” (Moon)

Paul Rand explores the relationship between digital programs and creativity in “Computers, Pencils, and Brushes,” holding the opinion that the use of the computer produces “a language of technology” rather than “a language of art.” I find that Rand holds the opposite opinion in comparison with Graham to some degree. I remember Rand saying, “concepts and ideas spring from the mind and not the machine.” Yes, I agree with his point here. Without a knowledge of design, the computer is useless, let alone capable of producing anything as creative as an art piece. However, I don’t agree with Rand completely, in the sense that the computer actually can create various art forms. Especially in the modern age, in which high tech is more prevalent than ever, so many art installations are combining artistic ideas with the use of the computer. Film production, graphic design, user interface design, 3D modeling… countless creative art forms are forming tighter and tighter connections with the computer. Therefore, in response to Rand, I think we have to try our best to become familiar with more computer skills and meanwhile keep thinking — keep our eyes open and be aware of every detail in life — to push ourselves to be creative.