NIME – Week 1 Project

For this week we had to make an instrument out of a piezoelectric sensor. I had quite a few ideas going into this, but ran into some big roadblocks early on. Regardless of the medium I tried to conduct sound through, the speaker wasn’t picking up much. The only sound I could get was a crackling when I ran my fingers across the sensor itself, so my plan became making a piece resembling the gentle crackling of a campfire. I then realized that the assignment shouldn’t be this difficult and asked to use Su Han’s sensor instead. The assignment suddenly became a lot simpler: it wasn’t my methods that were wrong, the sensor I had been using was broken.

With a working sensor, I was able to experiment with some of my ideas. My first was to move the sensor across different surfaces and simulate the sound of skateboarding, accompanied by a video of people skateboarding. This worked, but didn’t seem in the vein of the prompt, which was to create an actual instrument. I played around with some rubber bands, and once I liked the effect and how it sounded, I decided to try to make a string instrument. I experimented with a few different surfaces and wrapped rubber bands around them. This wasn’t working as I expected, until I realized that the movement of the rubber bands alone on a block wasn’t enough to trigger the sensor. So I put the sensor under a sheet of tin foil to carry the vibrations better. This worked very well; my next step was to figure out how to get specific, different sounds from each band.

I had a tuner on my phone, so it was time to figure out what exactly made different notes. It seemed like the strength of the rubber band and how much it was stretched determined the pitch. I got out my tuner and tried to alter the pitches until I had a basic chord, but since I didn’t have a way to change the pitch gradually, this proved difficult: I had to add bits and pieces to change how much the rubber band was stretched, then guess and check. I didn’t end up with my desired notes, but was happy with the results regardless.

NOC Final – Traffic Visualization

For my final project I wanted to focus on a field I’ve learned about recently: traffic visualization. This field is focused on simulating and displaying the patterns of automobile (and sometimes other forms of) movement. It felt like a perfect way to utilize the skills I’ve gained in Nature of Code to create something that parallels the way a system works in the real world.

Furthermore, this is closely related to a problem I’ve thought a lot about before: the Google Maps problem. The idea is that when you use Google Maps and there is an accident or congestion ahead, it will suggest alternate routes. But a lot of people use Google Maps, so what if by diverting traffic somewhere else you actually create more congestion than you relieve? And if this is true, what is the breakpoint for it to become an issue? What percentage of drivers need to be taking directions from Google Maps before rerouting backfires? My goal was to create a working simulation that I could feed information from the Google Maps API to reproduce similar pathmaking decisions.

I haven’t been able to find any scholarly work on this phenomenon (maybe it doesn’t actually matter as much as I think it does), but for traffic visualization there are two main families of algorithms: leader-following and autonomous agents. Leader-following is based on the idea that most traffic consists of vehicles in a line, with rules for each role (leader or follower, and at any time a car can be both) that account for the majority of time spent in traffic. Cars are treated more like groups of cars depending on which line of traffic they are in at any given time. The other approach models each car as an autonomous agent with its own decision points, acting more individually. We went into autonomous agents in class, so I wanted to build on that work and follow that line of thought for traffic visualization.

As for my game plan, I was aiming to figure out what the essence of cars and maps was and model them accordingly. I figured all I needed was an autonomous agent that could interact with its environment and make pathing decisions according to its current environmental factors. That was pretty much the essence of the vehicle as far as my concerns went. The much more interesting part was the map, as this is what gives the agent the information to make its decisions. I intended to have the map create flow fields that the autonomous agents would be influenced by.
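To make the flow-field idea concrete, here is a minimal sketch, in plain Python rather than p5 (the grid dimensions, the field shape, and the steering strength are all made-up illustration values, not my actual project code): the map is a grid of vectors, and each agent nudges its velocity toward the vector stored in whatever cell it currently occupies.

```python
import math

# Hypothetical flow field: a coarse grid of unit vectors, one per cell.
COLS, ROWS, CELL = 8, 6, 50

# A field that points right at the top and bends downward as you move down.
field = [[(math.cos(r * 0.2), math.sin(r * 0.2)) for c in range(COLS)]
         for r in range(ROWS)]

def desired_velocity(x, y):
    """Look up the flow vector under an agent at pixel position (x, y)."""
    c = min(int(x // CELL), COLS - 1)
    r = min(int(y // CELL), ROWS - 1)
    return field[r][c]

def steer(vel, x, y, strength=0.1):
    """Nudge the agent's velocity toward the field's desired direction."""
    dx, dy = desired_velocity(x, y)
    return (vel[0] + (dx - vel[0]) * strength,
            vel[1] + (dy - vel[1]) * strength)

vel = (0.0, 0.0)
for _ in range(100):
    vel = steer(vel, 120, 40)  # agent parked in one cell for illustration
print(vel)  # converges toward that cell's flow vector
```

The same lookup-then-steer loop maps directly onto a p5 agent's `applyForce` pattern.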

So once I had a map with different objects the agents could interact with, such as roads and traffic lights, what was the next step? The ability to systematically generate maps from existing data. I could always manually place objects where they corresponded on the map, but wouldn’t it be way cooler if the user could just upload a map image and have the cars drive around on any map they gave?

Well, herein lay the issue. I chose to go the cool route. The cool route was just so cool that I focused on map generation over everything else; it was a much more fun problem to solve than just making autonomous agents and basic flow fields, right? In focusing so much on implementing the ideal version of map importation, I didn’t properly budget the time I had. Never focus on the reach goals before the fundamentals are in place. I ended up having to compromise on the map importation as well, having it simply accept primary colors instead of the full-fledged image recognition or even color gradient analysis I was originally planning. Even the basic four-color recognition is so buggy I couldn’t get it to work consistently. So in the end I wasn’t left with much beyond a very basic map generation program and a simple agent that could check containment and change steering direction accordingly.
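For anyone curious what "accept primary colors" amounts to, here is a sketch of the core idea in Python with numpy rather than p5 (the palette values and the road/grass/water/light labels are my own made-up example, not the actual project's mapping): snap each pixel to the nearest of four reference colors, so a nearly-black pixel becomes a road cell and so on.

```python
import numpy as np

# Hypothetical 4-color palette; each map cell type gets one reference color.
PALETTE = np.array([
    [0,   0,   0],    # 0: road
    [0,   255, 0],    # 1: grass
    [0,   0,   255],  # 2: water
    [255, 0,   0],    # 3: traffic light
], dtype=float)

def classify(image):
    """image: (H, W, 3) RGB array -> (H, W) array of palette indices."""
    # Squared distance from every pixel to every palette color, broadcast.
    diff = image[:, :, None, :] - PALETTE[None, None, :, :]
    dist = (diff ** 2).sum(axis=-1)   # (H, W, 4)
    return dist.argmin(axis=-1)       # nearest palette entry per pixel

# A tiny 1x4 "map": nearly-black, greenish, blueish, reddish pixels.
img = np.array([[[10, 10, 10], [30, 200, 40], [20, 30, 220], [240, 10, 5]]],
               dtype=float)
print(classify(img))  # [[0 1 2 3]]
```

Nearest-color snapping like this is forgiving of anti-aliasing and compression noise, which is exactly where exact-color matching tends to get inconsistent.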

With all the sad results out of the way, I don’t think this is the end of my project by any means. I just don’t think p5 was a good fit for the map importation I wanted, but I do think it will work well for creating autonomous agents once I have the map (up until I start to use thousands of agents and pathmaking, but I’ll cross the bridge to openFrameworks when I get there). I will continue working on this project throughout the break and hopefully have some interesting progress by the time I’m done. While I definitely didn’t achieve my goals in the time allotted, I also think I just set unreasonable expectations for myself to complete in a week and a half.

NOC – Oscillation – Sam Arellano

For this week I wanted to work a bit more with the waves that were shown in class. They had perfect potential for becoming a sound visualizer, and after all the cool midterm projects some of my classmates did visualizing sound, I wanted to try it out. In the end I mainly linked the amplitude of the sound to the waves. I wanted to do a bit more with FFT, but I couldn’t figure out how to convert the arrays it returns into something that could easily modify the values of the waves. I honestly spent way longer on dumb mistakes than I should have, like getting the browser to read from my local files (I ended up running a simple Python server) or messing around with sound analysis through Chrome Web APIs instead of just using p5. If I were to work on it more, I would encode more information into the waves, primarily color.

NOC Midterm – Teaching a computer to play pong

For my midterm I wanted to make a smaller version of my final. My final project will be experiments with multiple neural networks training on each other, so to make a decent start, I decided to work on a simpler problem: teaching a computer to play a specific game with a neural network. My original goals were as follows: use a version of Pong coded in p5.js to teach a computer to play Pong, all inside the browser and all in JavaScript. As it turns out, that was pretty tricky to get done in just a week, so I had to make some modifications. First issue? The language. JavaScript just doesn’t have the robust support for complicated math and machine learning operations that Python does, so I went back to my more trusted language. Second, since I was using Python, coding and rendering Pong in pygame is a lot more difficult than just using a great library called OpenAI Gym. This library has built-in functionality for playing old Atari games and storing the data received from them, along with great documentation and a clean API. With this switch made, I did impose one constraint on myself: the machine learning part had to be done with nothing more than numpy. There are a number of machine learning libraries in Python that make composing simple neural networks a trivial task, but while I’ve done machine learning before, I very rarely get into the nitty-gritty parts of the math, and that’s what I particularly wanted to learn from this project.

As for issues, the big one was time. Not coding time, but training time. I had to use a virtual machine with Ubuntu on it, as OpenAI Gym wasn’t playing nice with Windows. This massively slowed down training, as I couldn’t fully use my computer’s processing power. Moving forward to my final, I should be using some AWS servers to train so I don’t have to worry about my poor little computer overheating or having a stroke. With all these explanations for my architectural decisions out of the way, let’s get into the project.

The basics of a neural network are as follows: the network takes in a set of inputs (usually a matrix of numbers) and feeds them through one or more layers. Each layer performs mathematical operations on the numbers, and at the end of the layers an output is produced. This is mostly done through matrix multiplications and dot products that change the size of the matrix being passed between the input, the hidden layers, and the output.
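In numpy terms, that input-to-output flow can be sketched in a few lines. This is an illustrative toy, not my actual network: the layer sizes, the ReLU nonlinearity, and the scaled random initialization are all assumptions chosen for the sketch.

```python
import numpy as np

# Minimal single-hidden-layer forward pass: input vector -> hidden layer
# (matrix multiply + nonlinearity) -> one output probability.
rng = np.random.default_rng(0)

n_in, n_hidden = 6400, 200   # e.g. 80x80 processed pixels -> 200 hidden units
W1 = rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)
W2 = rng.standard_normal(n_hidden) / np.sqrt(n_hidden)

def forward(x):
    h = W1 @ x                           # hidden pre-activations
    h[h < 0] = 0                         # ReLU nonlinearity
    logit = W2 @ h                       # single output score
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> P(move up)

p = forward(rng.standard_normal(n_in))
print(p)  # a probability in (0, 1)
```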

For my project, the input was pixel data, as that’s what OpenAI Gym returns on each frame. After receiving that input I did some preprocessing on it, downsampling it to reduce computation time (reducing resolution by removing every other pixel, grayscaling it to remove color, removing the background) and converting it to a workable matrix of values.
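That preprocessing step looks roughly like this, loosely following the common Pong recipe. The crop bounds and the specific background color values here are assumptions for the sketch, not numbers verified against Gym's output.

```python
import numpy as np

def preprocess(frame):
    """frame: (210, 160, 3) uint8 Atari frame -> flat float vector."""
    img = frame[35:195]          # crop away the score area (assumed bounds)
    img = img[::2, ::2, 0]       # halve resolution, keep one color channel
    img = img.astype(float)
    img[img == 144] = 0          # erase one background shade (assumed value)
    img[img == 109] = 0          # erase the other background shade
    img[img != 0] = 1            # everything left (ball, paddles) -> 1
    return img.ravel()           # (80 * 80,) input vector

frame = np.full((210, 160, 3), 144, dtype=np.uint8)  # fake all-background frame
x = preprocess(frame)
print(x.shape, x.sum())  # (6400,) 0.0
```

Collapsing everything to a 0/1 vector throws away a lot, but the ball and paddle positions survive, and that is all the network needs per frame.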

Once I had my input data, I fed it to the hidden layer. I used a single-layer network, primarily because it allowed me to cut some corners with the math and reduce computation time. I’m honestly surprised a single layer was deep enough to properly process this information, but if there’s anything I learned from this project, it’s that the more I think I understand how these algorithms work, the more I’m surprised by the exceptions.

The hidden layer, once finished with its computations, sends its result as an output. The result I worked with was the probability of moving the paddle up or down. In my initial tests I had it give the actual value 1 or 0 by just rounding the probability, but instead taking the probability and flipping a weighted coin added an extra layer of dynamic randomness which really improved the learning rate. I want to say I understand why it helps… but I’m honestly not sure; I’ve just read in machine learning research papers that this approach is often employed.
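The weighted coin itself is tiny. In this Python sketch the action codes (2 for up, 3 for down) follow the usual Gym Pong convention, but treat that mapping as an assumption rather than something taken from my code:

```python
import numpy as np

rng = np.random.default_rng(42)

def pick_action(p_up):
    """Sample an action from the network's probability instead of rounding:
    p_up = 0.7 means "up" roughly 70% of the time, "down" the other 30%."""
    return 2 if rng.random() < p_up else 3

ups = sum(pick_action(0.7) == 2 for _ in range(10_000))
print(ups / 10_000)  # roughly 0.7
```

Sampling keeps the model exploring: even a confident policy still occasionally tries the other action, which gives the learning step something to compare against.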

So every frame of the game worked like this: take a screenshot of the game state, process the screenshot data, feed it through the network, get a probability of going up or down, and then have the computer move the paddle up or down depending on how the coin flip landed. However, this is missing the important part of machine learning: the learning. The model grows and learns by being slightly changed depending on how it’s doing, so every 50 iterations I performed a technique called backpropagation on the model. Traditionally this involves compounding many partial derivatives, but since I used a single-layer neural network I was able to simplify the calculations with some tricks. Without getting too deep into the math, backpropagation works backwards through the network, adjusting the weight of each node depending on how the program did. If the model was fed the current game data and produced an action that led to a good outcome, that is rewarded, and the node is changed in a way that encourages making that decision again in a similar circumstance. The opposite is also true: making a decision in a specific state that leads to a bad outcome is penalized. In this way, the network can compile information about how it performs and change accordingly to (very slowly) become a Pong master!
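A toy version of that update, shrunk down to the simplest possible policy (one weight vector and a sigmoid output, not my actual two-matrix network; the input size, learning rate, and batch are all made up): the gradient of the log probability of the chosen action is scaled by the reward, so actions that preceded a point scored get reinforced and actions that preceded a point lost get discouraged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in = 16
w = np.zeros(n_in)

def p_up(x):
    """Probability of choosing "up" in state x under the current weights."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def update(states, actions, rewards, lr=0.1):
    """states: inputs; actions: 1 = up, 0 = down; rewards: +1 good, -1 bad."""
    global w
    for x, a, r in zip(states, actions, rewards):
        # d/dw log P(a | x) for a sigmoid policy is (a - p) * x;
        # scaling by the reward r pushes toward or away from the action.
        w = w + lr * r * (a - p_up(x)) * x

# Pretend "up" in state x0 always led to a point scored:
x0 = rng.standard_normal(n_in)
before = p_up(x0)
update([x0] * 50, [1] * 50, [+1] * 50)
print(before, "->", p_up(x0))  # probability of "up" in x0 goes up
```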

This was the basic workflow of my program. It just ran over and over, gradually “learning” how to play Pong and recording how it did. It did simple bookkeeping, like recording stats and storing its current model every 100 iterations, and kept chugging along and slowly improving. I should mention one thing about the exact definition of an iteration. I actually trained two models: one where an iteration was counted as a point, and another where an iteration was counted as a game. Both saw little improvement in the first few days, but the first model was the one that eventually made massive improvements, while the second ended up stagnating and not improving very quickly. As such, the demonstrations I’ll be showing are from the first model, as it ended up much better.

Let’s see how it does! (Apologies for the potato quality 🙁; I had to run this on Ubuntu since OpenAI Gym doesn’t like Windows, and I couldn’t get a screen recorder working on my virtual machine.)

So here is the model without any training. It initializes with completely random weights, so as you can see in the window, the paddle just moves randomly. Sometimes it seems like it’s following a pattern and rallying with the opponent, but that’s all just variance, and you quickly realize there is no pattern there.

Here’s the model after about two days of training. It’s still very dependent on random numbers and doesn’t look like it has made any major improvements over the initial model. At this point I was honestly extremely worried. I made a few changes to the code and started training separate models to hedge my bet in case this one was a total dud that wouldn’t improve at all.

Eureka! Here’s the model four days later. While it still isn’t beating the computer, it has made some massive improvements. It can more reliably return the ball and can actually score points that aren’t a fluke! This was a huge jump over the previous checkpoint.

Here is the model’s current state. It has been training for a little more than five days now, and while it still hasn’t beaten the computer, it’s definitely developing some strategies to get there. I’m extremely proud of its progress thus far and will keep it running and training until it can actually beat the computer, and eventually do so consistently.

So let’s discuss a little more about what we just saw. The model has learned how to return more consistently, but if you look at its scores, they all come from “smashes”. In Pong, returning with the edge of the paddle increases the speed of the ball and reverses the angle of trajectory, while returning with the middle of the paddle gives a slower return with a similar mirrored trajectory. The points the model scores all come from these smashes. Here we’ve witnessed it really “learn” something and start to adopt a strategy, and the reason for this makes a lot of sense.

When we talk about machine learning, we aren’t really talking about learning the way humans do it yet. Machine learning is based on doing the same thing over and over again until the right method emerges. Humans, on the other hand, use a lot more contextual information and adapt from previous experiences; I don’t need to walk into a wall 100 times just to find a door in a house I’ve never been in. Machine learning is about giving a computer a set of constraints, a set of available actions, and a fitness score. The fitness score is a measure of how well it’s doing: as it does more desirable things, its fitness goes up. The model just crunches numbers to figure out how to maximize the probability of doing something that makes the fitness score go up in any given state or similar scenario. The only fitness score I implemented was based on the actual game score, and here lies the problem. The model’s opponent is REALLY good. The opponent can measure where the ball is going to be and will always move there, provided it can move fast enough (it does have a constraint on movement speed, so it can’t just teleport).

When you or I start playing Pong, our goal is just to return the ball over and over again; eventually we might figure out some nuances, but our goal is to not let the ball get past our side. The model, however, doesn’t care as much about not losing as it does about winning. Not losing is more of a side effect, so it keeps doing things until it scores points. The only way to really score against its opponent is to smash the ball and get it moving quicker than the opponent can react. Once this sequence of actions happens often enough and is rewarded accordingly, the model falls into this strategy. Eventually, I’m pretty sure the model will just figure out how to get a single return on the initial serve and then smash the next pass so the opponent can’t reach it. I could fix this by including a secondary fitness that tracks rally time, rewarding properly returning the ball. This would make it play more like a human.

Overall, I am extremely satisfied with the results of my work. The model still has a while to train before I would call it successful, but watching it gradually grow and waking up to it having learned a new trick or strategy is extremely rewarding. I will be continuing this project and gradually scaling it up into a larger, more experimental work for my final.


NOC – Forces Homework – Sam Arellano

For this week I used forces to make a Pong game. This game will be the starting point for my midterm project, which will be training a neural network to play it in two ways: first against a computer, and second with two neural networks learning from each other at the same time. The second is totally experimental and might just not work, but I will do it for science.

Implementing this week’s project was pretty simple. I took code from the bouncing ball example and added two new classes: a player class and a computer class. The player takes simple user input that moves the paddle up and down. The computer class has some rudimentary AI that needs to be improved (the big issue is that it doesn’t account for bounces, only the future y position of the ball, so if that position goes off screen the calculation is wrong). I then extended the ball class to make it properly respond to bouncing off the paddles and to reset its position when going off screen.
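The bounce issue could be fixed by "folding" the predicted y position back onto the screen: a ball that would travel to y = height + d actually ends up at y = height - d after a wall bounce, and the motion repeats every two screen heights. A small sketch of the idea in Python (the height constant is arbitrary; the math ports directly to the p5 version):

```python
HEIGHT = 600

def fold(y):
    """Map an unbounded y onto [0, HEIGHT] by mirroring at the walls."""
    y = y % (2 * HEIGHT)          # the bouncing motion repeats every 2*HEIGHT
    return 2 * HEIGHT - y if y > HEIGHT else y

def predict_y(ball_x, ball_y, dx, dy, paddle_x):
    """Predict where the ball crosses the paddle's x, bounces included."""
    time = abs(paddle_x - ball_x) / abs(dx)
    return fold(ball_y + time * dy)

print(fold(650))   # 550: one bounce off the bottom wall
print(fold(-100))  # 100: one bounce off the top wall
```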

There are a lot of things that could be improved, such as adding extra features (a two-player mode would also be quick to implement), but I just wanted to start working on a backbone for my midterm to build upon.

"use strict"

var ball;
var player;
var computer;
var down = false;
var up = false;
var force;
var outSide;
var p1score = 0;
var p2score = 0;


function setup() {
  createCanvas(1000, 600);
  background(0);
  ball = new Particle(width / 2, height / 2);
  player = new Player(19*(width/20),height/2);
  computer = new Computer(1*(width/20),height/2);
  // let's give a random velocity
  ball.vel = createVector(5,-5);
}

function draw() {
  background(0);
  fill(255);
  textSize(32);
  text(p2score, 100,10,30,30);
  text(p1score, 800,10,30,30);

  if(down){
    force = createVector(0,1);
    player.applyForce(force);
  }
  else if(up){
    force = createVector(0,-1);
    player.applyForce(force);
  }

  player.checkBoundaries();
  player.update();
  player.display();

  if(computer.prediction(ball.pos.x,ball.pos.y,ball.vel.x,ball.vel.y)){
    force = createVector(0,-1);
    computer.applyForce(force);
  }
  else{
    force = createVector(0,1);
    computer.applyForce(force);
  }

  computer.checkBoundaries();
  computer.update();
  computer.display();



  ball.checkCollision(player.pos.x,player.pos.y);
  ball.checkCollision(computer.pos.x,computer.pos.y);

  outSide = ball.checkBoundaries();

  if(outSide === 1){
    p1score++;
  }
  else if(outSide === -1){
    p2score++;
  }

  ball.update();
  ball.display();


}

function keyPressed(){
  if(keyCode === UP_ARROW){
    up = true;
  }
  else if(keyCode === DOWN_ARROW){
    down = true;
  }
}

function keyReleased(){
  if(up === true){
    up = false;
  }
  else if(down === true){
    down = false;
  }
}

//ball

class Particle {
  constructor(x,y) {
    this.pos = createVector(x,y);
    this.vel = createVector(0,0);
    this.acc = createVector(0,0);
    this.dia = 30;
  }
  update() {
    this.vel.add(this.acc);  
    this.pos.add(this.vel);  
    this.acc.mult(0);        
  }
  display() {
    push();
    translate(this.pos.x, this.pos.y);
    noStroke();
    fill(255);
    ellipse(0,0, this.dia, this.dia);
    pop();
  }
  checkBoundaries() {
    // x
    if (this.pos.x < 0) {
      this.pos.x = width/2;
      this.pos.y = height/2;
      this.vel.x = random(4,10);
      this.vel.y = random(-4,-10);
      return(1);
    } else if (this.pos.x > width) {
      this.pos.x = width/2;
      this.pos.y = height/2;
      this.vel.x = random(-4,-10);
      this.vel.y = random(4,10);
      return(-1);

    }
    // y
    if (this.pos.y < 0) {
      this.pos.y = 0;
      this.vel.y = -this.vel.y;
    } else if (this.pos.y > height) {
      this.pos.y = height;
      this.vel.y = -this.vel.y;
    }

    return(0);
  }
  checkCollision(locX,locY){

    if((this.pos.x > locX) &&(this.pos.x<locX+30) &&(this.pos.y > locY) &&(this.pos.y < locY+250)){
      console.log("collision");
      this.vel.x = -this.vel.x;
      this.vel.mult(1.1);
    }
  }
  applyForce(f){
    this.acc.add(f);
  }
}

//player

class Player {

  constructor(x,y){
    this.pos = createVector(x,y);
    this.vel = createVector(0,0);
    this.acc = createVector(0,0);
  }

  update(){
    this.vel.add(this.acc);
    this.pos.add(this.vel);
    this.acc.mult(0);

  }

  display(){
    push();
    translate(this.pos.x,this.pos.y);
    noStroke();
    fill(255);
    rect(0,0,30,250);
    pop();
  }

  applyForce(f){
    this.acc.add(f);
  }

  checkBoundaries(){
    if (this.pos.y > height-250) {
      this.pos.y = height-250;
      this.vel.y = 0;
    } else if (this.pos.y < 0) {
      this.pos.y = 0;
      this.vel.y = 0;
    }
  }




}

//computer 

class Computer {

  constructor(x,y){
    this.pos = createVector(x,y);
    this.vel = createVector(0,0);
    this.acc = createVector(0,0);
  }

  update(){
    this.vel.add(this.acc);
    this.pos.add(this.vel);
    this.acc.mult(0);

  }

  display(){
    push();
    translate(this.pos.x,this.pos.y);
    noStroke();
    fill(255);
    rect(0,0,30,250);
    pop();
  }

  applyForce(f){
    this.acc.add(f);
  }

  checkBoundaries(){
    if (this.pos.y > height-250) {
      this.pos.y = height-250;
      this.vel.y = 0;
    } else if (this.pos.y < 0) {
      this.pos.y = 0;
      this.vel.y = 0;
    }
  }

  prediction(ballX,ballY,balldx,balldy){
    // abs() on both terms keeps the time estimate positive
    // regardless of which direction the ball is moving
    var time = abs(ballX - this.pos.x) / abs(balldx);
    var newY = ballY + time * balldy;
    if(newY < this.pos.y){
      return(0);
    }
    else{
      return(1);
    }
  }


}


Cerecares Trip Response – Sam Arellano

Cerecares Trip

This trip was really enjoyable. My aunt is disabled and has been through different day programs and care facilities, so I’ve had experience visiting them before. Thankfully, the majority of the ones I’ve seen have been happy places where she was given good opportunities to work on tasks, learn skills, and interact with others, but I’ve also come into contact with some of the facilities that create the stigmas against care homes. The kind of places where the tenants aren’t adequately taken care of, where the facility is understaffed, tenants are ignored, and both the location and the tenants are left filthy. By no means is this the fault of the tenants staying there, and most of the time it isn’t the fault of the caretakers either; they are often overworked and underpaid. But regardless of where the blame lies, it was a great surprise that Cerecares was nothing like those institutions.

Walking through the halls, it was easy to see that the staff and organizers loved the students. Their work was on the walls to show off, and the staff and volunteers looked at the kids with genuine pride when a student was able to complete a task. The kids weren’t being looked at as a burden or a task to take care of, but as legitimate students who were learning new things and had bright futures ahead of them, along with great personalities.

The story of the whole facility was also very heartwarming. The first video they showed, about the founder’s own life journey and her eventual opening of Cerecares, was pretty inspiring. That being said, the other promotional materials felt a little… patronizing? They felt like “suffering porn”, where people can look at someone else’s situation and think, “well, I might have it bad, but at least I’m not them”, or put it in an overly inspirational light that just comes off as disingenuous. I’m not sure of the exact way to phrase it, but they felt uncomfortable. The story of the boy being adopted was framed as a story of him overcoming hardship with a tidy happy ending, but even the “happy ending” felt at best bittersweet, at worst very depressing and not even in his best interests. And the music montage video was Sarah-McLachlan-singing-over-dying-puppies levels of manipulative. We did discuss on the bus how this is rooted in cultural differences between China and the United States, but even so, it still feels over the top and against the essence of the facility’s mission statement.

Awkward promotional materials aside, it was a great trip, and it’s easy to see the facility is making a great impact in the students’ lives. I hope it can get the funding it deserves to continue making a difference and helping these individuals learn the skills they need for more independence.

Assistive Tech Week 3 Response – Sam Arellano

How to switch adapt a toy

This video surprised me with just how simple the entire process was. I primarily work with software, and honestly, messing around with hardware often intimidates me. I know it shouldn’t, but there’s just something so malleable and low-stakes about coding. Arduinos and breadboards are fine, but once it gets to soldering or messing with existing wiring, I don’t think of that as my domain. This video showed just how simple, and moreover how cheap, it is to do. The toys and equipment usually sold for disabled people run into the hundreds of dollars, when this toy could be put together with 10 dollars and 15 minutes of time. Even if it’s not a business idea, it seems like a great skill to have to be able to help people out.

Feelings meets Testing

This article was extremely interesting. I often take the current state of design for granted, assuming industry standards have just always been that way. Growing up in California, everyone had their “lean startup mentality” and you were always running into people putting together prototypes and MVPs, so it’s easy to think this is the way it’s always been rather than a new development. This article went into how prototyping has developed, specifically when it comes to creating an experience of usage for your testers. The other point of interest is just how recently a lot of these developments have occurred. The focus on usage and user interface design really only seems to have taken off in the past two to three decades. I feel this is a great development not just for disabled individuals, but for everyone. Calling back to the curb-cut effect, this new way of designing and prototyping just naturally leads to a better user experience. What good are form and features if the actual user experience sucks? This leads me to thinking about how much more will change in just the span of my own career. Who knows what the next big development will be.


NOC Inspiration: MarI/O – Sam Arellano

One thing that has really interested me is how neural networks are kind of the inverse of what we’ve learned about nature of code so far, yet they go hand in hand. Nature of code is very focused on using code to emulate natural behaviors we observe around us, while neural networks use natural concepts to improve the performance and end results of algorithms. Neural networks were a massive paradigm shift in machine learning. The general idea is to emulate the way our brains learn new concepts, with individual neurons that interact with and influence each other. This allowed for the advancement of what many know as deep learning: very deep, complex layers of these neurons that convert information into a usable format to teach the computer how to make decisions. We hear about many applications of machine learning (finance, biology, customer interaction, natural language processing), but there are some really fun ones too. The piece of inspiration I chose is one such fun example, showing neural network applications in video games of all things.

In this video, video game YouTuber and computer scientist SethBling shows off a program he made to train a computer to beat a level of Super Mario World. It uses many natural concepts you would expect to hear in a biology class (evolution, natural selection, mutation, species), yet all of these refer to iterations of code that develop over time and teach the computer how to make progress. It’s an extremely interesting video that I feel takes another look at just what we can aim for with this class.

NOC Week 3 Assignment – Vectors – Sam Arellano

For this week I worked on building on top of more examples from the book. I’m extremely interested in making systems of autonomous agents and eventually want to use them for my midterm and final, so I read ahead to those chapters and worked on implementing the flocking example in my own code.

Once I had the example from the book working, I had to figure out a way to have more external forces act on the individual agents. The first one to pop to mind was some sort of wind force, so I made it so that when the user pressed a key, wind would push all the agents to the right. Having only one direction of wind seemed pretty boring, however, so I allowed the user to press any arrow key for a corresponding wind force.

The last thing I did to expand on it was make it so that when you click the mouse, the wind partitions the screen and doesn’t affect all the agents. For example, if you click the mouse and press the right arrow key, only agents to the right of the mouse are affected. I felt this feature could be used for further applications when I continue giving more depth to this system.

There are a lot more things I could implement, but I think it works fine as a demo for now. The only thing I do want to address is that my method of checking for user input feels extremely clunky and inefficient. There are definitely better ways to do it, but it doesn’t cause performance issues so long as I keep the number of agents under around 150.

var flock;
var rightWind = false;
var leftWind = false;
var upWind = false;
var downWind = false;
var pressed = false;

function setup() {
  createCanvas(640,360);

  flock = new Flock();
  for (var i = 0; i < 50; i++) {
    var b = new Boid(width/2,height/2);
    flock.addBoid(b);
  }
}

function draw() {
  background(51);
  flock.run();
}


function Flock() {
  this.boids = [];
}

Flock.prototype.run = function() {
  for (var i = 0; i < this.boids.length; i++) {
    this.boids[i].run(this.boids);
  }
}

Flock.prototype.addBoid = function(b) {
  this.boids.push(b);
}



function Boid(x,y) {
  this.acceleration = createVector(0,0);
  this.velocity = createVector(random(-1,1),random(-1,1));
  this.position = createVector(x,y);
  this.r = 3.0;
  this.maxspeed = 3;
  this.maxforce = 0.05;
}

Boid.prototype.run = function(boids) {
  this.flock(boids);
  this.update();
  this.borders();
  this.render();
}

Boid.prototype.applyForce = function(force) {
  // We could add mass here if we want A = F / M
  this.acceleration.add(force);
}

Boid.prototype.flock = function(boids) {
  var sep = this.separate(boids);
  var ali = this.align(boids);
  var coh = this.cohesion(boids);
  sep.mult(1.5);
  ali.mult(1.0);
  coh.mult(.5);
  this.applyForce(sep);
  this.applyForce(ali);
  this.applyForce(coh);
  // Wind forces: while the mouse is pressed, only agents on the
  // corresponding side of the mouse are affected.
  if(pressed){
    if(rightWind){
      var right = createVector(random(.02,.2),0);
      if(this.position.x > mouseX){
        this.applyForce(right);
      }
    }
    if(leftWind){
      var left = createVector(random(-.2,-.02),0);
      if(this.position.x < mouseX){
        this.applyForce(left);
      }
    }
    if(upWind){
      var up = createVector(0,random(-.2,-.02));
      if(this.position.y < mouseY){
        this.applyForce(up);
      }
    }
    if(downWind){
      var down = createVector(0,random(.02,.2));
      if(this.position.y > mouseY){
        this.applyForce(down);
      }
    }
  }
  else{
    if(rightWind){
      var right = createVector(random(.02,.2),0);
      this.applyForce(right);
    }
    if(leftWind){
      var left = createVector(random(-.2,-.02),0);
      this.applyForce(left);
    }
    if(upWind){
      var up = createVector(0,random(-.2,-.02));
      this.applyForce(up);
    }
    if(downWind){
      var down = createVector(0,random(.02,.2));
      this.applyForce(down);
    }
  }
}

Boid.prototype.update = function() {
  this.velocity.add(this.acceleration);
  this.velocity.limit(this.maxspeed);
  this.position.add(this.velocity);
  this.acceleration.mult(0);
}


Boid.prototype.seek = function(target) {
  var desired = p5.Vector.sub(target,this.position);
  desired.normalize();
  desired.mult(this.maxspeed);
  var steer = p5.Vector.sub(desired,this.velocity);
  steer.limit(this.maxforce);
  return steer;
}

Boid.prototype.render = function() {
  var theta = this.velocity.heading() + radians(90);
  fill(127);
  stroke(200);
  push();
  translate(this.position.x,this.position.y);
  rotate(theta);
  beginShape();
  vertex(0, -this.r*2);
  vertex(-this.r, this.r);
  vertex(this.r, this.r);
  endShape(CLOSE);
  pop();
}

Boid.prototype.borders = function() {
  if (this.position.x < -this.r)  this.position.x = width +this.r;
  if (this.position.y < -this.r)  this.position.y = height+this.r;
  if (this.position.x > width +this.r) this.position.x = -this.r;
  if (this.position.y > height+this.r) this.position.y = -this.r;
}


Boid.prototype.separate = function(boids) {
  var desiredseparation = 25.0;
  var steer = createVector(0,0);
  var count = 0;
  for (var i = 0; i < boids.length; i++) {
    var d = p5.Vector.dist(this.position,boids[i].position);
    if ((d > 0) && (d < desiredseparation)) {
      var diff = p5.Vector.sub(this.position,boids[i].position);
      diff.normalize();
      diff.div(d);
      steer.add(diff);
      count++;
    }
  }
  if (count > 0) {
    steer.div(count);
  }

  if (steer.mag() > 0) {
    steer.normalize();
    steer.mult(this.maxspeed);
    steer.sub(this.velocity);
    steer.limit(this.maxforce);
  }
  return steer;
}


Boid.prototype.align = function(boids) {
  var neighbordist = 50;
  var sum = createVector(0,0);
  var count = 0;
  for (var i = 0; i < boids.length; i++) {
    var d = p5.Vector.dist(this.position,boids[i].position);
    if ((d > 0) && (d < neighbordist)) {
      sum.add(boids[i].velocity);
      count++;
    }
  }
  if (count > 0) {
    sum.div(count);
    sum.normalize();
    sum.mult(this.maxspeed);
    var steer = p5.Vector.sub(sum,this.velocity);
    steer.limit(this.maxforce);
    return steer;
  } else {
    return createVector(0,0);
  }
}


Boid.prototype.cohesion = function(boids) {
  var neighbordist = 50;
  var sum = createVector(0,0);
  var count = 0;
  for (var i = 0; i < boids.length; i++) {
    var d = p5.Vector.dist(this.position,boids[i].position);
    if ((d > 0) && (d < neighbordist)) {
      sum.add(boids[i].position);
      count++;
    }
  }
  if (count > 0) {
    sum.div(count);
    return this.seek(sum);
  } else {
    return createVector(0,0);
  }
}



function keyPressed(){
  if(keyCode === RIGHT_ARROW){
    rightWind = true;
    console.log("wind on");
  }

  if(keyCode === LEFT_ARROW){
    leftWind = true;
    console.log("wind on");
  }

  if(keyCode === UP_ARROW){
    upWind = true;
    console.log("wind on");
  }

  if(keyCode === DOWN_ARROW){
    downWind = true;
    console.log("wind on");
  }

  return false;
}

function keyReleased(){
  if(keyCode === RIGHT_ARROW){
    rightWind = false;
    console.log("wind off");
  }
  if(keyCode === LEFT_ARROW){
    leftWind = false;
    console.log("wind off");
  }
  if(keyCode === UP_ARROW){
    upWind = false;
    console.log("wind off");
  }
  if(keyCode === DOWN_ARROW){
    downWind = false;
    console.log("wind off");
  }

  return false;
}

function mousePressed(){
  pressed = true;
  return false;
}

function mouseReleased(){
  pressed = false;
  return false;
}
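As a possible cleanup of the input handling I called clunky above: p5 also provides keyIsDown(), which can replace the four boolean flags and the keyPressed/keyReleased handlers entirely. The helper below is a hypothetical sketch, not part of the program above (windFromKeys is my own name for it): it reduces a key-state snapshot to a single wind direction, and in the sketch you would call it each frame with keyIsDown(RIGHT_ARROW) etc. and wrap the result in createVector(). I've used a fixed strength instead of random() so the helper stays pure.

```javascript
// Hypothetical refactor: collapse the four wind flags into one vector.
// keys is an object of booleans, e.g. { right: keyIsDown(RIGHT_ARROW), ... }.
function windFromKeys(keys) {
  // Opposite keys cancel; diagonals combine naturally.
  var x = (keys.right ? 1 : 0) - (keys.left ? 1 : 0);
  var y = (keys.down ? 1 : 0) - (keys.up ? 1 : 0);
  // Roughly the midpoint of the 0.02–0.2 range the sketch uses,
  // fixed here instead of random() to keep the function pure.
  var strength = 0.11;
  return { x: x * strength, y: y * strength };
}
```

Inside Boid.prototype.flock, a single `var w = windFromKeys(...)` followed by one applyForce call (with the same mouse-partition check) would then replace the eight near-identical if blocks.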

Assistive Technology Week 2 Readings

New York Has a Great Subway, if You're Not in a Wheelchair

In this piece, a Google software engineer discusses the challenges he now faces as a wheelchair user in getting around New York by subway. The really interesting thing to me about this piece is that it captures the overarching theme of this week's readings very well: people aren't often malicious towards the disabled population, they just don't care. The author himself states that he only realized these challenges after requiring a wheelchair himself: issues like only a fifth of the stations having elevators, those elevators constantly being out of service, and no real way for someone to know this is the case until they get there. This is just another case of the disabled population being sadly forgotten, but hopefully work can be done to fix this issue and improve accessibility in New York.

Are Colleges Doing Enough to Make Videos Accessible to the Blind?

Continuing the trend of the forgotten, I honestly wouldn't have even thought these videos were an issue had I not read this article. Yet the individuals who are affected definitely notice the problem. I feel a lot of these problems, rather than stemming from blatant stigma or hatred, simply stem from a lack of education on these topics. As for the monetary cost of making these internet videos accessible to the blind, I'm certain it would be lower if these considerations were included in the creation process rather than addressed by going back to remedy the issues after the fact. Foresight is the better plan in these situations.

Becky, Barbie’s Wheelchair-Bound Friend

This whole story just makes me sad. This can't be attributed to mere forgetfulness; they obviously released this toy with a specific demographic in mind. It would be one thing if there were simply no demand for the product and so they discontinued the line, but there was obvious demand for and happiness over the release of Becky. What happened seemed more like the company deciding it wasn't worth the trouble. While I can forgive individuals for being forgetful, once it gets to an organizational and company level, it becomes a discussion of whether the investment will pay off. Are these people worth it? In the case of Becky, the company decided they are not.

From Charity to Independent Living

This is an amazing story that I'm honestly surprised isn't told more in schools. I heard nothing of it in any of my history books, not even in California history, as much as California loves to show off how progressive it is. I knew there was a disability rights movement in the same time frame as the black civil rights movement and the gay rights movement, but that's about as far as my knowledge went. I knew of big activists like Martin Luther King Jr., Malcolm X, and Harvey Milk, but I had never heard of Ed Roberts or Judith Heumann, let alone the massive amount of change they made. I have to wonder why this isn't more widely included in our educational materials, but it's probably because content space is limited, and once again, the disabled community just isn't deemed worth it.

A Brief Historical Review of Rehabilitation Practices

It is a little disheartening that it seems we can only move the public to action when we attach the tag of "help our veterans," and even then it doesn't always work. It is good that there has been a massive amount of development for the disabled community, but it seems to be aimed at soldiers first and then trickle down to everyday people. Are these problems only valid enough to solve if they're happening to a soldier hurt on the front lines, and not to your neighbor who was born into this world with that same condition? This is probably just me soapboxing too much, but regardless, it is good to be informed of another history I didn't know much about.