Digital Farm Final – PlantMail

For my final project I originally planned to extend my midterm, which focused on a completely closed system for growing plants. One small feature I had planned was for the plant to be able to message you and tell you to add more water or food to the system when necessary. Once I started working on this feature, however, I found it more interesting than the rest of the system. While my original idea was to explore how little user interaction I could rely on, this project instead turned the interaction up to 11, to the point that your plant was now sending you email and complaining about you on its Twitter account.

The first step was to figure out which APIs and services I would be using. I focused mainly on email and Twitter for this project (more social media could easily be added, but for the sake of time and proof of concept I started with these) and decided on Mailgun for the email service and Tweepy, a set of Python wrappers for the Twitter API that I had used in past projects. These were pretty easy to set up, and I quickly had some mock code to trigger the sending of emails and tweets.

The next step was to figure out how to make Python and Processing talk to each other. To do so, I set up a server in Python and had Processing act as a client to that server. Even though both ran locally, this seemed to be the quickest and simplest option. With a bit of basic socket programming, the two talked to each other just fine.

The final major step was to figure out how to gauge the plant’s levels. I used Arduino sensors and decided to monitor three levels: UV, temperature, and moisture. The big issue here was how unreliable single snapshot readings of temperature and UV were. To remedy this, I average those values over the whole observation period. For presentation purposes the observation interval was lowered to a minute to give a quicker example; in actual use it should be at least an hour to get an accurate reading of the conditions.

With each individual part in place, I was ready to put it together. The following is a basic architecture diagram of the process.

[Image: PlantMail architecture map]

Overall I am pleased with the project. If I continue it in the future there are a few issues to remedy. First, I want a more extensive vocabulary for the plant. Each sensor currently reports one of three levels, so the options for speech are limited. I would like to either make this more of a sliding scale or add more options for each outcome and have it choose randomly. As it stands, it doesn’t have the feel of giving the plant a personality when it simply goes through if-else statements and has only nine speech possibilities. The other issue is that of outliers. Since the Arduino averages readings over the whole observation period, the room could technically be freezing cold for half an hour and then burning hot for half an hour, and it would report as just fine. I should include a log that reports any individually extreme results.
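As a rough illustration of that last point, a small helper on the Python side could flag extreme individual readings while still reporting the average. This is only a hypothetical sketch: it assumes raw temperature samples get forwarded to the server rather than the pre-averaged level codes the client actually sends, and the threshold values are made up.

# Hypothetical outlier log, assuming raw temperature samples (in Celsius) are available
TEMP_LOW, TEMP_HIGH = 10.0, 35.0

def summarize_temps(samples, log_path="outliers.log"):
    #collect any individual readings that fall outside the comfortable range
    outliers = [s for s in samples if s < TEMP_LOW or s > TEMP_HIGH]
    if outliers:
        with open(log_path, "a") as log:
            log.write("extreme readings this period: %s\n" % outliers)
    #still report the plain average, as the current system does
    return sum(samples) / len(samples)

#half an hour of freezing and half an hour of burning still averages out to a "fine" 22.5
print(summarize_temps([5.0] * 30 + [40.0] * 30))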

 

// python code

import socket
import requests
import tweepy

consumer_key = "jXUSVWtNT2qkcjL80Gvlmy7FQ"
consumer_secret = "gRdUvWRUJs1kFSNSul0i8i3ZLhigN3t5lKMhs7qyOJ5h0XF2eS"

access_token = "831141872202502144-00boHhZVKmk1xO4ipunsm3VTfggPGpE"
access_secret = "c1nnaNb5bEx4LtjBKAmctlHShZGaqbovbpHLgtLLiXxRL"

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token,access_secret)

api = tweepy.API(auth)             

curTemp = -1
curUV = -1
curMoist = -1

def send_email(text):
    print("email sent")
    return requests.post(
        "https://api.mailgun.net/v3/sandbox1f65f03f51814fc18ab6231dd72fbde0.mailgun.org/messages",
        auth=("api", "key-36dd7247883ea5bb74c12c83ec18829c"),
        data={"from": "Mailgun Sandbox <postmaster@sandbox1f65f03f51814fc18ab6231dd72fbde0.mailgun.org>",
              "to": "Sam Arellano <msa455@nyu.edu>",
              "subject": "Hello Sam Arellano",
              "text": text})
    
    
def form_message(moist,temp):
    tempMessage = ""
    if moist == 2:
        tempMessage += "I'm drowning over here. "
    elif moist == 0:
        tempMessage += "I could use a bit of water, I'm parched. "
    
    if temp == 2:
        tempMessage += "It's burning up in here. Turn on the ac or something! "
    elif temp == 0:
        tempMessage += "Could you grab me a blanket or something? I'm freezing. "
    
    if temp == 1 and moist == 1:
        tempMessage = "Honestly, I can't complain. I'm feeling great"
    
    return tempMessage

def form_email(moist,temp):
    tempMessage = ""
    if moist == 0:
        tempMessage = "Hey man, I could really use some water. I'm feeling parched."
    elif moist == 2:
        tempMessage = "I'm kinda drowning over here, mind draining me out a bit?"
    elif temp == 0:
        tempMessage = "Mind turning on the heater? I'm about to become an ice cube."
    elif temp == 2:
        tempMessage = "I'm about to fry, turn on the ac or something!!"
    return tempMessage
#store your api key below
key = "key-36dd7247883ea5bb74c12c83ec18829c"
#designate your host and port info
#the blank host info designates it will accept any host
host = ''
port = 5555



#determine if this is first connection for the client
firstConnect = True

#create socket object s
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#attempt to bind the socket object to the designated host and port
#if there is a failure, print the error
try:
    s.bind((host,port))
except socket.error as e:
    print(str(e))
#listen for the processing client to connect to the server
s.listen(1)
print("waiting for a connection...")


while True:    
    conn, addr = s.accept()
    #on initiation notify the client they connected
    if(firstConnect):
        conn.send(str.encode("Server and client connected"))
        print("Server and client connected")
        firstConnect = False
    while True:           
        #notify server side of connection info
        print('connected to: ' + addr[0] + ":" + str(addr[1]))

        data = conn.recv(2048)
        #an empty message means the client disconnected, so stop reading
        if not data:
            break
        message = data.decode("utf-8")
        print(message)
        #the client sends the level codes as "UV,moisture,temperature"
        statuses = message.split(",")
        curUV = int(statuses[0])
        curMoist = int(statuses[1])
        curTemp = int(statuses[2])

        newUpdate = form_message(curMoist, curTemp)   
        print(newUpdate)
        try:
            api.update_status(status = newUpdate)
            print("tweeted")
        except Exception:
            #Twitter rejects duplicate statuses, so ignore tweet failures and keep going
            pass
        
        if(curMoist == 0 or curTemp == 0 or curTemp == 2):
            email = form_email(curMoist, curTemp)
            print(email)
            send_email(email)
        reply = ("Server: " + message)
   
        print(curUV)
        print(curMoist)
        print(curTemp)

    #clean up this client's connection and go back to waiting for a new one
    conn.close()

        
// processing code
import processing.net.*; 
import processing.serial.*;
import processing.video.*;
//initiate the client and serial objects
Client myClient;
Serial myPort;

int curMessage;
int curTemp;
int curUV;
int curMoist;


String messageToSend;
int newMessage;
//newIssue flags that a fresh set of sensor levels is ready to send to the server
int newIssue = 0;
void setup() { 
  size(640, 480); 
  myPort = new Serial(this, Serial.list()[0],9600);
  
  //initialize the client on home ip with same port as python server 
  //(the port is arbitrary, just has to be the same as the server)
  myClient = new Client(this, "127.0.0.1", 5555); 
  
  //setup communication to arduino
  //myPort = new Serial(this, "COM6", 9600);
  //setup webcam to take frames from stream
}

void draw() { 
  //read the camera stream and display that in the window
//  image(cam,0,0);

  //check the client is active. Then check if the string value has been changed
  //if the string value has been changed, send that value to the python server
  //then reset the string value to empty
  while(myPort.available() > 0){
    curMessage = myPort.read();
    checkMessage(curMessage);
    print("n");
    newIssue = 1;
    messageToSend = str(curUV) + "," + str(curMoist) + "," + str(curTemp);
  }
  //return server output after it has processed the file
  //*TO DO*: handle dropped client connections 
  if (myClient.active()) {
    if (newIssue == 1) {
      myClient.write(messageToSend);
      newIssue = 0;
    }  
    String cur = myClient.readString();
    if (cur != null) {
      println(cur);
    }
  }
//  println(frameCount);
} 

void mouseClicked() {
  if(mouseX > width/2 && mouseY > height/2){
    myClient.stop();
  }
}

void checkMessage(int message){
  if(message == 'a'){
    curUV = 2;
    print("High UV");
  }
  else if(message == 'b'){
    curUV = 1;
    print("Medium UV");
  }
  else if(message == 'c'){
    curUV = 0;
    print("Low UV");
  }
  else if(message == 'd'){
    curMoist = 2;
    print("High Moisture");
  }
  else if(message == 'e'){
    curMoist = 1;
    print("Medium Moisture");
  }
  else if(message == 'f'){
    curMoist = 0;
    print("Low Moisture");
  }
  else if(message == 'g'){
    curTemp = 2;
    print("High Temperature");
  }
  else if(message == 'h'){
    curTemp = 1;
    print("Medium Temperature");
  }
  else if(message == 'i'){
    curTemp = 0;
    print("Low Temperature");
  }
}

//arduino code

int uvSensorPin = A0;
int uvSensorValue;
int moistSensorPin = A1;
int moistSensorValue;
int tempPin = A2;
int tempValue;
int curMessage = 0;
int moistHigh = 400;
int moistMed = 25;
long uvHigh = 75000;
long uvMed = 40000;
int tempHigh = 30;
int tempMed = 20;

void setup(){
  Serial.begin(9600);
  pinMode(uvSensorPin, INPUT);
  pinMode(moistSensorPin,INPUT);
  pinMode(tempPin, INPUT);
}

void loop(){
  long uvSum = 0;
  long moistSum = 0;
  long tempSum = 0;
  
  for(int i = 0; i<1024;i++){
    uvSensorValue = analogRead(uvSensorPin);
    uvSum = uvSensorValue + uvSum;
    moistSensorValue = analogRead(moistSensorPin);
    moistSum = moistSensorValue + moistSum;
    tempValue = analogRead(tempPin);
    tempSum = tempValue + tempSum;
    delay(2);
  }

  long uvMean = uvSum / 1024;
  long moistMean = moistSum / 1024;
  long tempMean = tempSum / 1024;
  float cel = ((tempMean/1024.0)*5000)/10;
  float uv = (uvMean*1000/4.3-83)-21;

  //Serial.print("Current UV index is: ");
  //Serial.print((uvMean*1000/4.3-83)-21);
  //Serial.print(",");
  //Serial.print("n");
  //Serial.print("Current moisture level is: ");
  //Serial.print(moistMean);
  //Serial.print(",");
  //Serial.print("n");
  //Serial.print("Current temperature is: ");
  //Serial.print(cel);
  //Serial.print("n");
  
  if(uv > uvHigh){
    Serial.write('a');
  }
  else if(uv < uvHigh && uv > uvMed){
    Serial.write('b');
  }
  else{
    Serial.write('c');
  }

  if(moistMean > moistHigh){
    Serial.write('d');
  }
  else if(moistMean < moistHigh && moistMean > moistMed){
    Serial.write('e');
  }
  else{
    Serial.write('f');
  }

  if(cel > tempHigh){
    Serial.write('g');
  }
  else if(cel < tempHigh && cel > tempMed){
    Serial.write('h');
  }
  else{
    Serial.write('i');
  }
  
  //Serial.write((uvMean*1000/4.3-83)-21);
  //Serial.write(",");
  //Serial.write(moistMean);
  //Serial.write(",");
  //Serial.write(cel);
  
  delay(120000);
}

Interaction Lab Final – Mirror

Sam Arellano

Partner: Daniela Oh

Professor: Antonius

For our final project, we decided to extend our midterm, which was a lockbox that used facial recognition to determine that you were the owner and unlock accordingly. Our main goals when starting were to improve the aesthetics of the project, to create a more fluid user experience, and to squash a lot of the bugs that ended up in the final version of our midterm. The project we decided to make was a replica of the mirror from Snow White, which would be able to tell the user whether they were the fairest of them all. Using facial recognition, it would be able to tell if they were Daniela, and if not, they obviously weren’t the fairest of them all.

As far as aesthetics go, I think it went well. Daniela was in charge of creating the actual mirror itself. We weren’t able to find an actual double-sided mirror online, but we were able to purchase stickers that attached to both sides of a piece of transparent plastic and gave it the double-sided mirror effect. Once we put the backing on the mirror, this effect really stood out and worked well. Daniela then created the housing for the mirror by making a box for the back of the mirror to store the webcam and other components in. She also made a plate for the front of the mirror with the inscription “mirror mirror on the wall, who’s the fairest of them all”. These parts were laser cut and then laser etched and turned out very well.


I was in charge of the actual code for the project. For the most part I decided to build on top of my existing work from the midterm project. The infrastructure for the Python server and Processing client was already in place, and the facial recognition code was the most stable part of the whole project. The new features I had to add were the voice recognition used to “wake the mirror up”, along with figuring out how to make a more seamless user experience.

When it came to incorporating voice recognition, I began by trying a few different services, such as Google Cloud Speech, Sphinx, the Microsoft API, and some others. In the end, I found a simple Python library that included useful wrappers for these services. I went with the Google Cloud Speech service because, through testing, I found it to be the most reliable and accurate. Other services often messed up recognition of the words “mirror mirror”, especially when spoken by people with accents. As these were the critical words to recognize, I couldn’t compromise on that accuracy, even for the sake of speed. Once I had chosen my service, I incorporated it into the previous project. When the code starts running, the server waits for someone to speak the words “mirror mirror” to “turn on”. Since the code had to be up and running to recognize user speech, this was really just a state change, but the user didn’t have to know that. This also became one of the biggest hurdles. Because the code had to be up and running, it was extremely sensitive to background noise, and it was difficult for the program to differentiate between the speech it should collect and the speech it shouldn’t. It was also bad at segmenting speech, so it would wait until the user was done talking before saving the clip and sending it off for verification, even though the only important words were the first two. This is mainly because I used an API: all the heavy lifting was done server side, and there was nothing local to catch these issues. If I move forward with this project I would have to incorporate more fixes locally to deal with them, or maybe use machine learning to do the entire voice recognition locally.
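For what it’s worth, the speech_recognition library used in the code below does expose a couple of knobs that could soften these problems. The following is only a hypothetical sketch of the wake-word loop, not the code that actually ran: it calibrates against ambient noise before listening and caps each clip at two seconds so the recognizer isn’t waiting for the user to finish a whole sentence.

import speech_recognition as sr

r = sr.Recognizer()

def wait_for_wake_word(phrase="mirror mirror"):
    with sr.Microphone() as source:
        #sample the room briefly so the energy threshold adapts to background noise
        r.adjust_for_ambient_noise(source, duration=1)
        while True:
            #cap each clip at two seconds, since only the first two words matter anyway
            audio = r.listen(source, phrase_time_limit=2)
            try:
                heard = r.recognize_google(audio).lower()
            except (sr.UnknownValueError, sr.RequestError):
                continue
            if phrase in heard:
                return

#wait_for_wake_word() blocks here until someone says "mirror mirror"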

In our midterm project, easily the biggest issue was the user experience. After taking a picture of your face (which sometimes yielded an error for too many faces, or no faces at all), the user had to just wait for about 20 to 40 seconds for the result, and then all that happened was the turning of a servo. We wanted to make a more fluid user experience; since the mirror was an actual character many people knew of, it made sense to try to make interacting with it feel like interacting with a person. The big step there came with incorporating voice lines for the mirror to say at certain stages. There were three main stages: wake up, search, and result, with the result ending in one of two outcomes (Daniela or not Daniela). With the help of my roommate Colton Paul, we recorded various lines for each stage. I took those voice clips into Audacity to pitch them down and add a little reverb to get closer to the sound of the original mirror. Once they had the effect I wanted, we included them at the corresponding stages. At each stage I pull a random voice clip from the proper array to give the mirror a more fluid feel and prevent it from just going through a script of three lines each time. I also had to change the Processing client to a state machine in order to block incoming user input, as people really like pressing buttons and trying to break processes. At this point the entire program was much more robust. Aside from errors that the APIs I was using threw at me, the actual connection between Python and Processing worked rather well. The final touch was to add Arduino code to turn the LEDs on the mirror on and off whenever the mirror was speaking.

At this point it was time for user testing, and here an old problem reared its ugly head. The Google voice API was chosen for its accuracy, but I had to sacrifice speed for that accuracy, and it was pretty slow. It threw off the entire user experience for the person to have to sit and wait 15 to 30 seconds for the mirror to decide whether they had said “mirror mirror” or not. Considering our entire original goal was seamless user interaction, this was a wrench in our plans. I couldn’t really put any voice lines there to distract from the loading process like I did with the facial recognition loading, since this was the command to wake the mirror up; logically it wouldn’t make sense for the mirror to talk to you and then yawn, wake up, and ask why you interrupted it. Without a good alternative, I decided to cut the voice recognition from the version shown off at the final, as ambient noise was still just as big of an issue and would become even more of a problem in a noisy area.

Overall, I feel the project turned out well. I am proud of how it looks and want to keep it in my house as actual decor even without the program running. If I were to work on it more, I would want to figure out a more viable method for voice recognition of small speech fragments (which makes me really interested in how the Alexa can do it so flawlessly; I’ll probably look into that). The other major step is to put it all on a Raspberry Pi or to include a Wi-Fi shield and do all the code server side. I’d like this to be free from the computer and act like an actual mirror. This was definitely the project I’ve done that most focused on user interaction over features and capability, and that’s something I’d like to focus on more in the future.

 

 

//python code

import socket
from _thread import *
import kairos_face
import os
import speech_recognition as sr

r = sr.Recognizer()

completed = False

#store your api key and id below
kairos_face.settings.app_id = '50d0c8e2'
kairos_face.settings.app_key = '1517e15dc89de6b9a27de5bad83afe78'
#designate your host and port info
#the blank host info designates it will accept any host
host = ''
port = 5555
noted = False

#determine if this is first connection for the client
firstConnect = True

#create socket object s
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#attempt to bind the socket object to the designated host and port
#if there is a failure, print the error
try:
    s.bind((host,port))
except socket.error as e:
    print(str(e))
#listen for the processing client to connect to the server
s.listen(1)
print("waiting for a connection...")


def checkGoogle(audio):
    try:
        result = r.recognize_google(audio)
        print("google speech thinks you said " + result)
        return result
    except sr.UnknownValueError:
        print("google speech could not understand the audio")
        return("unknownError")
    except sr.RequestError as e:
        print("Could not request from google speech service; {0}".format(e))
        return("requestError")
   
#use the kairos api to take the file designated by the client and 
#compare it to the existing gallery in place
#return the verification value to denote similarity of the faces

def verify(filename,subject,gallery):
    recognized = kairos_face.verify_face(file=filename, 
                                           subject_id=subject,
                                           gallery_name=gallery)
    #print(recognized)
    confidence = recognized['images'][0]['transaction']['confidence']
    print(confidence)
    if(confidence > .75):
        return("1")
    else:
        return("0")


#function to handle the client and server interactions
#(left over from the threaded midterm version; the single-client loop below is what actually runs)
def threaded_client(conn):
    #constantly check for information from the client side
    try: 
        while True:
            data = conn.recv(2048)
            message = (data.decode("utf-8"))
            #reply = ("Server: " + message)
            print(message)
            verificationVal = str(verify(message, "daniela", "gallery1"))
            if not data:
                break
            conn.sendall(str.encode(verificationVal))
        
    finally:
        conn.close()

while True:    
    conn, addr = s.accept()
    #on initiation notify the client they connected
    if(firstConnect):
        #Use voice recognition to activate the mirror and move on to receiving 
        #picture files from the client.
        #This is currently disabled due to the slow retrieval time of the API
        #and the issues it has with processing background noise
        #while (completed == False):
        #print("wake up the mirror")
        #with sr.Microphone() as source:
        #    tempAudio = r.listen(source)
       
        #    result = checkGoogle(tempAudio)  
        #    if "mirror mirror" in result:
        #        completed = True
        #        print("mirror initiated")
        conn.send(str.encode("Server and client connected"))
        firstConnect = False
    while True:      
        #notify server side of connection info
        if(noted == False):
            print('connected to: ' + addr[0]+":"+str(addr[1]))
            noted = True
        data = conn.recv(2048)
        #an empty message means the client disconnected, so stop reading
        if not data:
            break
        message = data.decode("utf-8")
        print(message)
        verificationVal = verify(message, "daniela", "gallery1")
        conn.sendall(str.encode(verificationVal))
    #clean up this client's connection and go back to waiting for a new one
    conn.close()


// processing code

//import a bunch of libraries
import processing.net.*; 
import processing.serial.*;
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;
import processing.sound.*;

//instantiate a bunch of soundfiles
SoundFile correctFace1;
SoundFile incorrectFace1;
SoundFile incorrectFace2;
SoundFile incorrectFace3;
SoundFile incorrectFace4;
SoundFile incorrectFace5;
SoundFile searching;
SoundFile wakeUp1;
SoundFile wakeUp2;

SoundFile[] incorrects = new SoundFile[5];
SoundFile[] wakeups = new SoundFile[2];

float r;
int r2;

//initiate the client, capture  and serial objects
OpenCV opencv;
Client myClient;
Capture cam;
Serial myPort;

//this file acts as a buffer for storing temporary pictures
String file = "C:UsersMain CharacterDesktopmirrorpicturestemppic.jpg";
//newMessage indicates the client has received a new message from the server
int newMessage;
int pictureFlag;
//this indicates there is a new picture file to send to the server
int newFile = 0;
//this is used to indicate there is a message to send to the server
int message = 0;
//this keeps track of the current state of client, this is done to make sure
//processes aren't overloaded by people sending multiple pictures or wakeup requests
int state = 0;


void setup() { 
  size(640, 480); 
  //import all sound files
  correctFace1 = new SoundFile(this, "correctFace1.wav");
  incorrectFace1 = new SoundFile(this, "incorrectFace1.wav");
  incorrects[0] = incorrectFace1;
  incorrectFace2 = new SoundFile(this, "incorrectFace2.wav");
  incorrects[1] = incorrectFace2;
  incorrectFace3 = new SoundFile(this, "incorrectFace3.wav");
  incorrects[2] = incorrectFace3;
  incorrectFace4 = new SoundFile(this, "incorrectFace4.wav");
  incorrects[3] = incorrectFace4;
  incorrectFace5 = new SoundFile(this, "samFace.wav");
  incorrects[4] = incorrectFace5;
  searching = new SoundFile(this, "searching.wav");
  wakeUp1 = new SoundFile(this, "wakeUp1.wav");
  wakeups[0] = wakeUp1;
  wakeUp2 = new SoundFile(this, "wakeUp2.wav");
  wakeups[1] = wakeUp2;
  
  myPort = new Serial(this, Serial.list()[0],9600);
  //initialize the client on home ip with same port as python server 
  //(the port is arbitrary, just has to be the same as the server)
  myClient = new Client(this, "127.0.0.1", 5555); 
  //setup communication to arduino
  //setup webcam to take frames from stream
  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
    cam = new Capture(this, "Logitech Webcam C930e,size=640x480,fps=30");
    cam.start(); 
    cam.read();
  }
} 

void draw() { 
  //read the camera stream and display that in the window
  if (cam.available() == true) {
    cam.read();
    //Originally I tried to add a feature to check for user face being present before
    //sending the picture to the server, but openCV wasn't working on windows
    //opencv = new OpenCV(this,"test.jpg");
    //opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
    //faces = opencv.detect();

  }
//  image(cam,0,0);
  set(0, 0, cam);

  //check the client is active. Then check if the string value has been changed
  //if the string value has been changed, send that value to the python server
  //then reset the string value to empty

  //return server output after it has processed the file
  //*TO DO*: handle dropped client connections 
  if (myClient.active()) {
    if (message == 1) {
      myClient.write("state1");
      message = 0;
    }  
    if(message == 2){
      myClient.write(file);
      message = 0;
    }
    String cur = myClient.readString();
    
    if (cur != null) {
      newMessage = 1;
      if(int(cur) == 1){
        pictureFlag = 1;
      }
      else{
        pictureFlag = 0;
      }

      println(cur);
    }
  }

  if (newMessage == 1) {
    if (pictureFlag == 1) {

      myPort.write('w');
      correctFace1.play();
      
      println("Welcome Daniela");
    } else {
      //pick a random "not Daniela" clip so the mirror doesn't repeat itself
      r2 = int(random(incorrects.length));
      myPort.write('w');
      incorrects[r2].play();
      println("You are not Daniela");
      //if(myPort.available())
    }
    state = 0;
    newMessage = 0;
    pictureFlag = 0;
  }
} 

void mouseClicked() {
  //used to force take picture and send to server
  if(state==1){
    state = 2;
    cam.stop();
    saveFrame(file);
    //newFile = 1;
    cam.start();
    message = 2;
    myPort.write('q');
    searching.play();
    delay(3000);
  }
}

void keyPressed(){
  //used to force state change
  if(key == 'w'){
    if(state==0){
      state=1;
      //pick a random wake-up clip from the array instead of always playing the first one
      r2 = int(random(wakeups.length));
      myPort.write('e');
      wakeups[r2].play();
      delay(4000);
    }
  }
}

//arduino code

int ledPin = 12;
int val;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);

}
void loop() {
  // put your main code here, to run repeatedly:
  while(Serial.available()){
    val = Serial.read();
  }
  if(val == 'q'){
    lightLEDs(50);
  }
  if(val == 'w'){
    lightLEDs(50);
  }
  if(val == 'e'){
    lightLEDs(50);
  }
  

}

void lightLEDs(int n){
  for(int i = 0; i < n; i++){
    digitalWrite(ledPin,HIGH);
    delay(100);
    digitalWrite(ledPin,LOW);
    delay(100);  
  }
  val = 'a';
}

Lab 12 Media Controller

Sam Arellano

Professor Antonius

Date: 5/5/17

Partner: Daniela Oh

 

For this lab we had to make some sort of media controller. We decided to work with controlling the speed of a video like we saw in class, and thought that the potentiometer would be a natural fit for controlling it.

The wiring for this was pretty simple, as we just wired up a potentiometer to be read by the Arduino.


Once it was hooked up, we needed to use serial communication to get the potentiometer value from the Arduino to Processing. We weren’t super sure how to do that since we had forgotten, but thankfully Professor Antonius helped us with the basics. Once that was up and running, we were able to easily control the speed the video played at.

If we worked more on this we would like to insert a simple mapping function to be able to play the video at many different speeds instead of just 2x, 0.5x, and 1x like it is now. Making the value continuous wouldn’t be too difficult; we would just divide the potentiometer reading by a constant as it’s received by Processing.

 

 

//arduino code

int val;
int pot;
int valPot = 2; //potentiometer
int LED = 13;
void setup() {
  // put your setup code here, to run once:
  pinMode(LED, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // put your main code here, to run repeatedly
  pot = analogRead(valPot)/4;
  while (Serial.available()){
    val = Serial.read();
  }
  if (pot > 150){
    Serial.write('w');
  }
  else if (pot <= 150 && pot !=0){
    Serial.write('s');
  }
  else if (pot == 0){
    Serial.write('n');
    }
}

//processing code

import processing.serial.*;
import processing.video.*;
Movie myMovie;
Serial myPort;
int message; 
float vidSpeed = 1;

void setup() {
  size(500,500);
  myPort = new Serial(this, Serial.list()[2], 9600);
  frameRate(30);
  myMovie = new Movie(this, "pandas.mp4");
  myMovie.loop();
}

void draw() {
  myMovie.speed(vidSpeed);
  while ( myPort.available() > 0) {  // If data is available,
    message = myPort.read();      // read it and store it in message
    println(message);
  }
  if (myMovie.available()) {
    myMovie.read();
  }
  image(myMovie, 0, 0);
  if (message == 'w') {        //pot in the upper range: double speed
    vidSpeed = 2;
  } else if (message == 's') { //pot in the middle range: half speed
    vidSpeed = 0.5;
  } else if (message == 'n') { //pot at zero: normal speed
    vidSpeed = 1;
  }
}

Final Project Proposal

Sam Arellano and Daniela Oh

Instructor: Antonius

 

Interaction is the set of interfaces between two or more entities, whether those are humans working with each other, humans utilizing machines, or even multiple machines. While interaction is a broad topic, its particulars are often the last parts of an equation to be realized. Take the internet and web development, a field I’m well versed in: while much development time has been focused on making new and cool technologies feasible, it feels like only recently has long-due respect been paid to proper UIs and in-depth analysis of the processes of user interfaces. Yet, just like the change to GUIs and the point-and-click mouse revolutionized computers, interfaces and interaction in general are the often overlooked component of a good user experience. Developers often focus on the new cool things they can include, yet all that is often necessary is a solid idea with a smooth and intuitive execution. Slack is nothing new; it’s just IRC with some sleek GUIs. In short, interaction is a very important, yet often overlooked topic.

 

For our final project, we would like to continue utilizing the technologies we worked with in our midterm and polish up some of the rough edges it had. Once again we will be utilizing facial recognition, but instead of a simple lockbox, we want something with a bit more interactivity. Taking inspiration from Disney princess tales, we want to make a mirror on the wall that can tell exactly who is the fairest of them all. We want to program this mirror to recognize a single user (or a few) and have it tell them they are the prettiest, but it must also make sure not to say that to others who aren’t its owner. And of course, since it’s the mirror on the wall, you need to talk to it to activate it.

 

As such, we will utilize the same facial recognition stack we used in the last project, consisting of a Processing client and a Python server that makes the API calls and returns the results to the client. This will check a newly taken webcam picture against the stored library of pictures of the user and return a verification value, determining whether the person is the user or not. We will also need to include verification for voice activation. Assuming we can get a speech recognition program up and running, the user will need to say “mirror mirror on the wall”, but if time constraints prove to be an issue, the starting mechanism might just be noise of any kind, not necessarily specific speech.

 

The issues we’d like to remedy from our midterm are user interaction and project aesthetics.

To fix the issue of user interaction, we would like to include a variety of stock voice lines for the mirror to say to the user, everything from “I can’t see you straight, dear” if there is no face in frame, to “hm…let me see if you are truly the fairest” while waiting on the API call. One annoying part of the midterm was the awkward sit-and-wait while the server made an API call to compare faces. In this way we would like to hide our “loading screens” and make the interactions more seamless and fluid. When it comes to improving aesthetics, the mirror itself won’t have much to modify. We would like to include a good-looking casing, however, and hide all the wires inside. We want this to look as much like a normal mirror as possible, and hopefully a pretty mirror.

Lab 11 Motor Drawing

Sam Arellano

Professor Antonius

4/28/17

Partner: Daniela Oh

For this lab we worked with stepper motors. We were tasked with creating a drawing machine by connecting two stepper motors to attachable arms that would hold a pen. This overlapping formation allowed an originally 360-degree mechanism to reach all the areas of a rectangle (with varying accuracy).

First we wired up the circuit to connect the stepper motor to the computer with the H-bridge. This was pretty simple as we just followed the schematic.

[Image: wiring]

We used arduino code from the examples library to get the motor going clockwise and counterclockwise on a delay. Here it is in action.

It was pretty funny to watch it flail around. We then added a potentiometer to be able to control the speed of the motor. Once again we used stock code from the examples library.

This allowed us to easily change the speed of the stepper motor. Finally, we met up with Mark and Sjur from another group and put our stepper motor contraptions together. By pinning them together, we were able to more easily (though still not accurately) control the location of the pen between them. Here it is when it all came together.

Overall this was a rather fun lab, and surprisingly simple to put together. When seeing it in class I had no idea how we would make it, but in practice it wasn’t too difficult.

#include <Stepper.h>

const int stepsPerRevolution = 200;  // change this to fit the number of steps per revolution
// for your motor


// initialize the stepper library on pins 8 through 11:
Stepper myStepper(stepsPerRevolution, 8, 9, 10, 11);

int stepCount = 0;  // number of steps the motor has taken

void setup() {
  // nothing to do inside the setup
}

void loop() {
  // read the sensor value:
  int sensorReading = analogRead(A0);
  // map it to a range from 0 to 100:
  int motorSpeed = map(sensorReading, 0, 1023, 0, 100);
  // set the motor speed:
  if (motorSpeed > 0) {
    myStepper.setSpeed(motorSpeed);
    // step 1/100 of a revolution:
    myStepper.step(stepsPerRevolution / 100);
  }
}

Lab 10 3D Modeling

For this lab we focused on using TinkerCAD for 3D modelling. We had to make some sort of either wearable technology, game controller, or a security device. I decided to make a game controller as that seemed the most interesting.

I originally made up a few shapes and just put them together, but then I realized this would be the housing for actual components; otherwise I would just have a pretty looking brick once it was actually printed out. To fix that, I created a smaller copy of the main body, overlapped it inside the main cube, and turned it into a hole. This allowed me to easily hollow out the casing of the controller so I could later put a circuit board inside. I turned the buttons into holes as well, so once I print it I can put actual buttons inside to interact with the circuit board, along with space for the joystick to sit in the casing and connect with a larger board. I finally included a hole in the back of the casing to allow the wiring to run out the back. This was the finished model.

[Images: view, view2]

Grass Seed Experiment Lab

For this lab we worked with grass seed, seeing what it could grow on. Some people worked with things like fabric, cardboard, and foam to get the grass seed to grow in the pores. I decided to use nylons and see if I could get a vaguely animal shape by filling them with soil and partitioning it with rubber bands.

[Image: soil sock]

I cut a nylon down to basically a sock and then filled it with soil. After that I spread grass seed throughout the top.

[Image: rubber-banded soil sock]

Sadly, it turned out that my balloon animal skills were not up to par. There goes my backup plan of travelling clown. The animal I ended up making was a caterpillar, just one step up from a snake. After the grass grows out I plan to add features like googly eyes, legs, and antennae. All that’s left is to put it in a drainage container and keep it watered until the grass grows in a few days.

[Image: watered sock]

Digital Farm Midterm Project: Capsule Plant

For this midterm we had to come up with a project based on one of two topics: either a general art piece or sculpture, or an air purification device. As I have next to no creative skills, I decided to make an air purification device. However, I didn’t just want to make a plant that only I would use; optimally I would be able to use this as a prototype for something marketable. With that in mind, I started thinking about what people would want out of a plant. I know many of my friends have the opposite of a green thumb and honestly just forget they have a plant half the time. While many other projects have been based around reminding users of their plant, I wanted to take a different route and just not care whether the user remembered.

With this idea of less interactivity in mind, I wanted to push it to the limit. What if there was no user interaction at all? What if it was just a jar you bought at the store, took home, put in the corner, and completely forgot about while it cleaned the air in your space? This would be based on those plants in glass jars that are basically entirely enclosed.

My original thought for the blueprint was a large plastic container with multiple levels. The bottom level would be the reservoir. The level above would hold the exposed roots of the plant. The level above that would hold the plant itself. The pump would rest at the bottom and pump water from the reservoir to the root level quicker than it could drain back down, filling up that level. I got a plastic jug and made incisions in it. My plan was to slot plexiglass into the incisions to make the layers of the container. The professor cut out plexiglass circles for my layers and I drilled through them to make drainage holes.

[Image: test1]

I then learned a very important lesson: plexiglass sucks to work with. When I drilled, it sometimes split; the cutouts weren’t perfectly even; and the slotting idea left a bunch of room at the sides of the plastic for water to leak through. It just wasn’t precise enough for a prototype. I then decided to simply put a smaller container inside a larger container instead of building separate levels.

[Image: test2]

This worked much better: no major leaks, the incisions in the inner container let the water drain slowly enough that the container would actually fill, and it was all around just easier to work with. I put the pump in the bottom and then glue-gunned the two containers together. I placed rock wool in the inner container and added the plant (also, apparently rock wool feels like asbestos and really hurt my hands). I replaced the top of the container and cut holes in the sides for a fan to push air through. Because I wanted an enclosed plant, I had to give it at least some circulation to properly clean the air.

[Image: plant]

With the actual plant container put together and stable, I then had to put the Arduino together. Thankfully this was all pretty simple wiring we had already done in class. I put the pump on a cycle of every 10 minutes to wet the roots of the plant and then used a relay to connect the fan motor and make it activate at the same time.

[Image: wiring]

Overall, the project turned out pretty fine. If I were to go further with it I would get a more suitable container or plant; the plant was a bit cramped. I’d also make sure the Arduino is completely inside the jar to preserve the idea of everything being encapsulated. If I really went further with it, I’d find a way to make the whole thing glass, just because it would look cooler, and possibly add fish to the bottom and make a closed ecosystem.

[Image: final]

Midterm Project: Facial Recognition Lockbox

Partner: Daniela Oh

Instructor: Antonius

For this project, coming up with an idea was pretty difficult. We weren’t sure exactly what we wanted to do at first, but eventually we decided on something to do with facial recognition, as we had found some cool APIs to use with it. We decided to make a simple lockbox with Arduino that would only open when the owner of the box showed their face in front of it.

My personal role was getting the facial recognition part working while Daniela worked on the Arduino part and the actual hardware components, like making the box and putting all the devices on it accordingly. My first thought was to make a basic architecture chart to figure out exactly how to get the whole thing running.

[Image: Architecture 1]

Simple, but architecture charts help me figure out my exact workflow and know what each part is supposed to independently accomplish. Broken down, there were a few major steps: take the picture of the person, send that picture to the API to compare, return the similarity value received, and open the servo or keep it closed depending on that value. Once I started fleshing out the steps, a lot of problems emerged. Primarily, the Processing libraries I found for facial recognition were either too slow or too weak. Most of them also just used machine learning principles and therefore depended on the strength of your machine to calculate all the values from scratch locally. I knew it would be quicker and more reliable to make a call to an established API from a facial recognition service and get the results that way. I’ve used Kairos before in Python projects, so I knew they were dependable and relatively quick (as long as you aren’t an idiot and try to update the existing gallery on their database every time you run the code instead of just once and then wonder why your code has such an abysmal run time…because…I totally didn’t do that).

This choice, while better for receiving reliable results, opened a whole new can of worms: how do I get Processing and Python to play nice together? I looked into how a Processing sketch can run a Python script, but there was either an issue with supplying an argument to the script or with capturing the output. I tried figuring out a way for Processing to execute a batch command and do it all itself, but that didn’t go anywhere (there were some tips on it, but they were way over my head). I then thought of using socket programming to get them to talk to each other. I knew a little bit about setting up basic servers in Python, so I decided to try that and see if I could get decent results. I kept trying slightly different things for each individual part and eventually my architecture scheme ended up looking like this.

[Image: networking architecture schematic]

From Processing to Python, back to Processing, and then to Arduino. Overall, the final project lived up to what I wanted. Here’s the box that Daniela ended up putting together.


The servo acts as a lock to prevent someone from opening the box until the Processing sketch gets the verification from the Python server. Here it is in action.

So…it works. But that’s not to say it works to my satisfaction. If I had more time with the project, there are a lot more kinks to work out. I’ll go through some of them.

I need a good way to detect that the user has closed the box and is done with it, so it can automatically re-lock. I also need a better way to take the picture than with a mouse click. I want to install an Arduino button for both of these tasks, but the pushbutton in my kit didn’t work out with the current design. On another hardware note, man, the box is ugly. We’d like to make it much more aesthetically pleasing if we were to move further with it.

The biggest issue though? The server. I got a bare-bones server up and running and did basically no error handling throughout it. Client disconnects? Server goes down. Client sends an error message? Server goes down. API returns an error? Server goes down. It is very finicky and I need to put some failure tolerance into the whole system. Either that, or work with another idea I had, which was using the filesystem as a pipe and having both Processing and Python poll a directory repeatedly until a txt file with the info normally sent over the server appears. While that may be more failure-proof, I think an optimized server setup will be quicker and less memory intensive.
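For illustration, the file-based alternative could look something like this on the Python side: poll a drop folder until the client writes a text file, process it, and write the result back as another file. The folder and file names here are hypothetical, not from the actual project.

import os
import time

#hypothetical drop folder that both the Processing sketch and the Python script would watch
DROP_DIR = "exchange"
REQUEST_FILE = os.path.join(DROP_DIR, "request.txt")
RESULT_FILE = os.path.join(DROP_DIR, "result.txt")

def poll_for_requests(handle_request, interval=0.5):
    os.makedirs(DROP_DIR, exist_ok=True)
    while True:
        if os.path.exists(REQUEST_FILE):
            with open(REQUEST_FILE) as f:
                picture_path = f.read().strip()
            os.remove(REQUEST_FILE)
            #in the real project, handle_request would wrap the Kairos verify() call
            with open(RESULT_FILE, "w") as f:
                f.write(handle_request(picture_path))
        time.sleep(interval)

#example: poll_for_requests(lambda path: "1" if path.endswith("temppic.jpg") else "0")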

Overall, I’m very proud of this project. Getting this done in a week is something that I’m pretty surprised about. On the surface the project seemed pretty simple, but every turn led to more and more issues so I’m glad we were able to at least get a prototype up and running in the time we had. I learned a lot about networking and working with external processing libraries throughout this project and would like to either extend this project or work on a similar one in the future.

 

UPDATE:

After a successful in-class presentation, we had to do demos the next day. In addition to changing the loaded galleries so that the box would recognize my face instead of my partner’s, I wanted to do some bug fixing so that the whole thing would work more reliably. The biggest issue was the server not properly reacting to errors. I originally didn’t include any error handling simply for time-constraint reasons, so I tried to go through and deal with errors a little more elegantly. I realized the big issue was that I made the network system work on multiple threads. This was good for speed, but realistically I’d only ever have one client at a time, so removing it wasn’t an issue. I removed the threading, and the server would then at least crash properly if it encountered an error (previously I couldn’t even force-crash the server; I would have to close the IDE completely and reopen it any time an error was thrown).

I put in a pretty catch-all and primitive error handling loop (try: code, except: pass) and that got rid of most of the issues server side. Now the client and server were properly talking to each other around 90% of the time. This is where a bigger problem came up. Even after initializing correctly and connecting to all the external hardware, the Processing client would sometimes display a blank screen in the Processing window (it should have displayed the webcam feed). The weird thing about it was that it wasn’t a reliable issue; it only happened every few times I ran the code. Any time I restarted my computer it would work the first time, but after that it was a tossup. I couldn’t figure it out on my own, but the next day Professor Antonius helped me realize it was an issue with which webcam was being selected. My computer was loading the list of available webcams a bit differently every time, so it was just a tossup whether or not I would connect to the right one. By manually specifying the camera name to use instead of taking the first entry in the array, that issue disappeared.
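Since the server code posted below is still the pre-update, threaded version, here is a rough sketch of what the single-threaded loop with the catch-all error handling looked like conceptually. It is an approximation, not the exact code that ran, and it assumes the same verify() helper that appears in the listing below.

import socket

#minimal sketch of the updated, single-threaded server loop described above;
#verify() is assumed to be the same Kairos helper shown in the listing below
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 5555))
s.listen(1)

while True:
    conn, addr = s.accept()
    print("connected to: " + addr[0] + ":" + str(addr[1]))
    while True:
        try:
            data = conn.recv(2048)
            if not data:
                break
            message = data.decode("utf-8")
            #catch-all: a bad file path or a Kairos error should not take the whole server down
            conn.sendall(str.encode(verify(message, "daniela", "gallery1")))
        except Exception as e:
            print(str(e))
            break
    conn.close()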

We then had demos for the class and my project worked the majority of the time. If I were to work on it more, I would include better user interfacing and squash some final bugs, but for a week-long midterm project I’m happy with how it turned out.

 

 

Arduino Servo Code:

#include <Servo.h>
Servo servo;
int servoValue;
int trueFalse; //if it's daniela, true, else false
int value;

void setup() {
  servo.attach(9);
  Serial.begin(9600);
}

void loop() {
  servoValue = analogRead(0); //read the analog value
  while (Serial.available()){
    trueFalse = Serial.read();
  }
  if(trueFalse=='t'){
    servo.write(90);
  }else if(trueFalse=='f'){
    servo.write(180); //180 is the servo's maximum angle
  }
  
}

Processing Client and webcam code:

import processing.net.*; 
import processing.serial.*;
import processing.video.*;
//initiate the client, capture  and serial objects
Client myClient;
Serial myPort;
Capture cam;

String file = "C:UsersMain CharacterDesktopfaceRecognitionkairos-face-sdk-python-masterpicturestemppic.jpg";
int newMessage;
int pictureFlag;
//newFile indicates there is a new picture saved to disk and ready to send to the server
int newFile = 0;
void setup() { 
  size(640, 480); 
  //initialize the client on home ip with same port as python server 
  //(the port is arbitrary, just has to be the same as the server)
  myClient = new Client(this, "127.0.0.1", 5555); 
  //setup communication to arduino
  myPort = new Serial(this, Serial.list()[0], 9600);
  //setup webcam to take frames from stream
  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    cam = new Capture(this, cameras[1]);
    cam.start();
  }
  
} 

void draw() { 
  //read the camera stream and display that in the window
  if (cam.available() == true) {
    cam.read();
    set(0,0,cam);
  }
  
  //check the client is active. Then check if the string value has been changed
  //if the string value has been changed, send that value to the python server
  //then reset the string value to empty
  
  //return server output after it has processed the file
  //*TO DO*: handle dropped client connections 
  if(myClient.active()){
    if(newFile == 1){
      myClient.write(file);
      newFile = 0;
    }  
    String cur = myClient.readString();
    if(cur != null){
      newMessage = 1;
      pictureFlag = int(cur);
      println(cur);
    }
  }
  
  if (newMessage == 1) {
    if (pictureFlag == 1) {
      myPort.write('t');
      println("Welcome Daniela");
    } else {
      myPort.write('f');
      println("You are not Daniela");
    }
    newMessage = 0;
    pictureFlag = 0;
  }
  


} 

void mouseClicked(){
  cam.stop();
  saveFrame(file);
  newFile = 1;
  cam.start();
}

Python server code:

import socket
from _thread import *
import kairos_face
import os

#store your api key and id below
kairos_face.settings.app_id = '50d0c8e2'
kairos_face.settings.app_key = '1517e15dc89de6b9a27de5bad83afe78'
#designate your host and port info
#the blank host info designates it will accept any host
host = ''
port = 5555

#determine if this is first connection for the client
firstConnect = True

#create socket object s
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#attempt to bind the socket object to the designated host and port
#if there is a failure, print the error
try:
    s.bind((host,port))
except socket.error as e:
    print(str(e))
#listen for the processing client to connect to the server
s.listen(1)
print("waiting for a connection...")
   
#use the kairos api to take the file designated by the client and 
#compare it to the existing gallery in place
#return the verification value to denote similarity of the faces

def verify(filename,subject,gallery):
    recognized = kairos_face.verify_face(file=filename, 
                                           subject_id=subject,
                                           gallery_name=gallery)
    #print(recognized)
    confidence = recognized['images'][0]['transaction']['confidence']
    print(confidence)
    if(confidence > .85):
        return("1")
    else:
        return("0")


#function to handle the client and server interactions
def threaded_client(conn):
    #constantly check for information from the client side
    while True:
        data = conn.recv(2048)
        #an empty message means the client disconnected, so stop reading
        if not data:
            break
        message = data.decode("utf-8")
        print(message)
        verificationVal = str(verify(message, "daniela", "gallery1"))
        conn.sendall(str.encode(verificationVal))
        
    conn.close()
    
while True:    
    conn, addr = s.accept()
    #on initiation notify the client they connected
    if(firstConnect):
        conn.send(str.encode("Server and client connected"))
        firstConnect = False
    #notify server side of connection info
    print('connected to: ' + addr[0]+":"+str(addr[1]))
    #create a new thread for server client communication
    start_new_thread(threaded_client,(conn,))

Digital Farm Midterm Project Idea

For my midterm project, I drew inspiration from the gardens in a jar you sometimes see in stores. They are fully enclosed structures that allow the plant to grow with little to no input from humans, aside from topping up the water every now and then. I wanted to create these for the purpose of cleaning the air. In one of the assigned readings, a company was working on making cleaner air spaces in their offices by figuring out exactly how many plants were necessary per person and planning accordingly. I thought that had to be a lot of work; it would be a lot easier to buy pre-partitioned amounts of plants and just get more if there are more people, as opposed to engineering an entire rig beforehand. Here is a diagram of my initial blueprint.


The entire project will be encapsulated inside a large water jug. The bottom layer will hold the growth solution for the plants. The next layer up will be the growth medium; I’m still not sure whether I’d like the roots to be in the air or in some medium like rockwool or soil. In air would be cleaner and get the solution to the plants quicker, but soil would give more of a failure buffer in case there is an issue with watering the plants. These layer coverings will be painted a dark color to protect against things like algae or overexposure to light. The next layer up will be the plants themselves. On the outside of the jug there will be a fan and two holes through the jug; this is for ventilating the air inside, both to control temperature and to push the newly cleaned air into the outside space. There will be a tube at the top connecting to the reservoir that allows the user to add new growth solution or water to the jug.

The water solution will be fed to the plants on a timed cycle. There will be a pump in the bottom reservoir connected to an Arduino that will start every 15 minutes (a number subject to change with testing). The rig will use an ebb and flow system, simply filling up the upper chamber with the roots every time the pump turns on and then allowing it to drain afterwards. When the pump is turned on, the fan will also turn on to circulate the air within the jug.

As for plant choice, I’m still not sure. I’m leaning towards spider plants and peace lilies for the sake of hardiness while still cleaning the air. As was mentioned in class, it’s hard to tell exactly how much air is being purified without precise testing, so for now I just want to make sure the plants don’t die.

Overall, my goal is to make a system that requires as little human interaction as possible. I want the human to be able to stick the entire rig in a corner, only think about it once every week or two to fill up the water, and be done with it. I also want simple scalability. Serving the needs of one person versus many shouldn’t mean engineering a larger and larger rig to house all the plants; it should simply mean getting a few more water jugs.