Staging Fright Final Documentation, David Santiano



My team, consisting of Abiral, Nick, and Sam, set out to create a truly horrifying experience, one that would set the stage to scare the living crap out of many a participant. So, we created Dr. Fakhr’s Museum of Mediocre Artefacts, a truly horrifying museum of dazzling and devastating delights.


MoMA entrance

We initially set out to create an experience embedded with sensors, solenoids, lights, and sounds that would scare the bejeezus out of many a participant. But the more we played around with the space, the more we realized that we could simplify it down to something that has withstood the test of time: a dark room paired with a creepy guy in a mask. Of course, the experience was a bit more than that, but that was its essence.


We were assigned the lockers, and we decided to start by moving them into an arrangement that could be deemed claustrophobic:

initial setup

You’ll also notice that the lights in the photo are blocked out, covered by cardboard. This was essential to the whole experience, as we couldn’t have any light leaking into our staging area.

My involvement with the final project was mostly in setting up the stage: I helped drape the curtains on top of the green screen frame, and also helped extend the frame a bit further with plastic pieces reinforced by spare wood.

Greenscreen frame not long enough? Just find some wood and plastic.

Using this, we were able to construct a bigger frame that suited our need for total darkness.



Next, we started draping curtains over the whole thing, aiming to create a space that was both claustrophobic and extremely dark.

WordPress’s image editing software is broken, so it’s a bit flipped right now

dark corridor

We secured these curtains with magnets:


Pretty effective! The extent of my personal work on the project was getting the space set up, getting rid of all light leaks, and helping with the blocking of the experience. I also acted as the main scare guy: dressed in a cloak and mask, I hid behind a curtain at the back of the lockers. It was hard to film the experience, as it was pitch black inside our space, but it was a great success; demand was so high that we were only able to get through about 10–20% of the people who signed up.

Trump Twitter Trasher, Network Everything, David Santiano

cheeto with that mold that sorta looks like hair but it’s not actually hair


The Trump Twitter Trasher is a photograph of Donald Trump that I modified to spit out his inane tweets into the trash (which will then be recycled). It works via a node.js server that uses the Twitter API to grab tweets from Donald Trump’s Twitter stream and store them in memory in an array. The tweets are used to populate an HTML page served by the server, where they can be selected and put into the trash.

The back end was a node.js server, which served the HTML page and also communicated with an Arduino over serial for printing the tweets.

The basic flow of information was:

Twitter —> node.js server —> HTML page with buttons for selection —> back to node.js server —> Arduino —> Receipt Printer —> Trash
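The server side of that flow can be sketched roughly like this. The Twitter client and the serial port are stubbed out here, and names like `storeTweet` and `trashTweet` are my own illustrative placeholders, not the project’s actual code:

```javascript
// Minimal sketch of the tweet pipeline. The real project used the
// Twitter API and serial communication; both ends are stubbed here.
const tweets = []; // in-memory store, populated from the Twitter stream

// Called once per tweet pulled from the (stubbed) Twitter API
function storeTweet(text) {
  tweets.push(text);
}

// Called when a button on the served HTML page posts a selection;
// hands the chosen tweet to the Arduino over serial (stubbed here),
// which forwards it to the receipt printer
function trashTweet(index, serialWrite) {
  const tweet = tweets[index];
  if (tweet !== undefined) {
    serialWrite(tweet + '\n');
  }
  return tweet;
}
```

The array doubles as the data source for the HTML page: the server just serializes `tweets` into the page it serves.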

The whole project was a bit challenging, as I’d never touched back-end web development before, so terms like sockets, routing, and POST and GET requests really flew around my head. What follows is the giant list of links that I used to obtain the knowledge I needed (and some I may not have needed):


But, it was a great learning experience. I feel very comfortable now grabbing information from a source and guiding it as it flows through the network I created…

… and then into the trash.

Pretty great! But I’d like to smooth out this nice orange diamond a bit more. During the show I noticed that notifying people of the printer’s current status via an alert wasn’t the best, so I want to integrate the warning a bit better. Also, there is a weird encoding issue with the tweets, and I’m thinking it may have to do with escape characters present in the tweet and all that jazz, so I’ll be looking into that. ALSO, I need to find a way to put it on the www.
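One plausible fix for the encoding issue — this is a guess at the cause, not a confirmed diagnosis — is that the Twitter API leaves HTML entities in the text, and thermal printers tend to choke on anything outside printable ASCII. A quick sanitizing pass might look like:

```javascript
// Hypothetical cleanup pass for tweet text before it goes to the
// thermal printer: decode the common HTML entities the Twitter API
// leaves in, then drop characters outside printable ASCII, which
// most receipt printers can't render.
function sanitizeForPrinter(text) {
  return text
    .replace(/&amp;/g, '&')
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/[^\x20-\x7E]/g, ''); // strip emoji, smart quotes, etc.
}
```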

Here’s some of the code: the HTML file, the Arduino file, and the node server JavaScript file:


Measuring Fear Through Motion

I’ve decided to take an alternate route when it came to measuring fear. For me, fear is an emotion that is associated with motion. I move my hands to my eyes or my ears to suppress my senses, or my shoulders hunch up and I leap back when I encounter something that startles me. I wanted to capture these moments of sudden movement and fear through various digital means.

I first wanted to use an ultrasonic sensor to detect any changes in the distance between my body and my computer, just measuring how far away I was from the computer in centimeters. The whole setup, however, was a bit wonky and unstable, and I felt like I couldn’t extract any meaningful data from what the ultrasonic sensor was giving me, so I decided to take an alternative route.
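For reference, the centimeter reading from this kind of sensor (I’m assuming an HC-SR04-style module here; the post doesn’t name the model) comes from the width of the echo pulse — sound travels about 0.0343 cm per microsecond, and the pulse covers the round trip:

```javascript
// Rough conversion for HC-SR04-style ultrasonic sensors: the echo
// pulse width in microseconds covers the trip out and back, so the
// one-way distance is half of (duration * speed of sound).
function echoToCentimeters(durationMicros) {
  return (durationMicros * 0.0343) / 2;
}
```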

OpenCV for Processing has a feature that detects moving elements. It does this to identify moving foreground elements and separate them from the unmoving background, outlining the foreground elements (moving and changing pixels) with colored contours composed of points. The more movement, the more contours, and thus more points. So, I programmed a Processing sketch to take a screenshot of the sketch window whenever it detected a certain amount of movement, looking like this:


Red on black seemed like a suitable color choice, given the theme of the class. I started up one of my favorite horror games, Outlast, and started the Processing sketch.


This was the first scare that I encountered; the sudden movement of my hand to my face was picked up by my Processing sketch:



My sudden high-pitched yelp was not picked up by the computer, however, but by the whole IMA lab, which resulted in many an underclassman and Jack B. Du staring at the weird, shaking noise-maker (me) in the corner of the lab. Overall, it was a big success! However, the sketch relies on the person playing or experiencing the scary event staying still until said scariness, so it may only be useful for me. Maybe it can be fine-tuned depending on the person? Only time will tell. Here is the code:
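As a rough illustration of the trigger logic only (not the actual Processing sketch — the contour format and threshold value below are my own assumptions), the decision boils down to counting contour points against a threshold:

```javascript
// Illustrative trigger: sum the points across all motion contours and
// decide whether this frame counts as a "scare" worth capturing.
// The threshold is arbitrary here; the real value would be tuned per person.
const SCARE_THRESHOLD = 400;

function shouldCapture(contours) {
  // contours: array of contours, each an array of points
  const totalPoints = contours.reduce((sum, c) => sum + c.length, 0);
  return totalPoints > SCARE_THRESHOLD;
}
```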

Physical Networks!

A satellite dish, for connecting to a television network, or for connecting to an intergalactic space-alien network?????


Mobikes, Ofos, and various other brands of rent-a-bikes that will go out of business soon, each brand connected to its respective application! A gathering of one-to-many networks?

A security camera, just one of many connected to Chairman Mao’s brain jar, where he uses his psychic powers to monitor the populace for any civil unrest.

Network Everything Response Week 4, David Santiano

The technologies discussed in the When Things Start to Think paper were all certainly very interesting, but the one that really struck me as the future was the Personal Area Network and its possibilities. For starters, I think it’s amazing that it began as a bug in the system they were first building; scientific discoveries happening by accident always give me a bit of a buzz. The possibilities discussed for the PAN seem almost endless; the ideas about small electrical currents opening doors, or business cards being automatically transferred with a simple handshake, were REALLY COOL. Making the integration of technology into our everyday life as seamless and natural as possible is a very admirable endeavor, and I agree with the article that the benefits certainly outweigh the disadvantages it would bring.

What I started to imagine while reading was a way to personalize and customize your PAN, making your own body interact with your environment in different ways. Augmenting your own gestures and sending different signals to certain objects could help streamline things and make them a bit more intuitive; maybe buildings or rooms that require unique identification could just use your own unique electrical signal to grant you access. Your personal preferences for the software you use could simply be stored along with you, allowing any workstation you sit down at to be automatically customized for you. Very simple ideas, but I’ll let them stew around in my head for a bit to find more creative ways to utilize them.

However, despite my visions of the future concerning the PAN, I don’t find Steve Mann’s cyborg augmentations to be particularly alluring or attractive. I get why he is doing it, and I admire his work as the father of wearable computing, but placing all those devices on my person and in my person seems a bit too intrusive for me; also, it just doesn’t look good. If I am going to make myself a cyborg, I want it to be not just useful but also cool-looking. I’ll admit it: he just doesn’t look cool. The modifications to his sense of sight, though, were extremely fascinating. Simulating how a certain animal sees the world could offer new perspectives on things, and may also be useful on a case-by-case basis. Maybe seeing like an insect could help with sports? There are tons of possibilities!

IMA Capstone, A Vote For The Future, David Santiano, my work thus far

So, like every single IMA project that I have undertaken, the end vision of my project has changed a tiny bit, mostly because of extra-curricular undertakings and the dreaded JOB SEARCH (helpmepls).

After a lot of helpful critiques received during the IMA capstone sessions, I have decided to make a few modifications to my project as a whole. Instead of running the project for many weeks and gathering data from people interacting with it, I want to focus more on the actual experience the user has with my voting terminal to another world. So, in order to do that, I am keeping the experience all in the same place. My modifications are as follows:

  1. The identification will no longer be purely web based, but will be given to the voter on-site in the form of a thermal receipt printout. In doing this I hope to keep the experience as simple as possible and improve its tangibility. I’ve also come to learn that people like getting cool things, so that is the latter half of my reasoning.
  2. I will be formulating at least three different scenarios for people to vote on, each holding the common theme of restricting voting privileges based on life factors that are out of one’s control. I’ve come to realize that vote suppression based on these life factors is the essence of the experience; it is where I am making my main commentary, and it is also the point where I can draw the most emotion out of the user interacting with the voting terminal.

After making these decisions, I have thought of a new way to structure the technical side of my project. I’ll still be working on the character generator in p5.js and will continue to prettify it a bit more, but I will also use the randomly generated data there to create JSON data, which will be placed into a MySQL database, then read by an Arduino and subsequently printed out for the user to keep.

The randomly generated ID will prevent people from voting twice and also serve as a unique identifier. I chose JSON because having all of a person’s attributes labeled and ready to parse makes things much easier for the Arduino to read and then print. I’m also thinking the Arduino can read the data via WiFi and send it to the printer via serial, so my serial connection doesn’t get bogged down with all the data: one clear channel for each function. Here are all my resources thus far, in the form of a massive brain dump:

Democracy for Realists (Achen and Bartels)
The Dictator’s Handbook (Bueno de Mesquita and Smith)
Search terms: empathy games, sociology, anthropology, art & art history, biological sciences, communication sciences, education, performing arts, political science, psychology
Bogost, Ian. “Empathy.” How to Do Things With Videogames, NED – New edition, vol. 38, University of Minnesota Press, 2011, pp. 18–23.
Simpson, Joseph M., and Vicky L. Elias. “Choices and Chances: The Sociology Role-Playing Game—The Sociological Imagination in Practice.” Teaching Sociology, vol. 39, no. 1, 2011, pp. 42–56.
Oppenheim, Walter. “Complex Games and Simulations in Schools.” Teaching History, no. 34, 1982, pp. 26–27.
Kocurek, Carly A. “Tabled for Discussion: A Conversation with Game Designer Michael De Anda.” QED: A Journal in GLBTQ Worldmaking, vol. 2, no. 2, 2015, pp. 151–172.
Empathy Games/Drawing Inspiration:
Dog Eat Dog
Weber’s Theory For Social Class:

ARS Week #2 Assignment David Santiano



Ah, the dreaded sloped ground that many a professor, student, or human being has had to traverse on their way to the IMA lab. There is no way to avoid it! And the ugly yellow color both grabs my attention and hurts my eyes. In order to improve upon this ugly sign, I have decided to let people find a neat graph that teaches students everywhere about the slope formula.
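For reference, the formula the graph teaches is just rise over run:

```javascript
// The slope formula: change in y divided by change in x
// between two points (x1, y1) and (x2, y2).
function slope(x1, y1, x2, y2) {
  return (y2 - y1) / (x2 - x1);
}
```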

Blippar thing

The augmented reality process using Blippar

Week 3 Response David Santiano

The first two chapters of John Palfrey and Urs Gasser’s The Promise and Perils of Highly Interconnected Systems do a great job of highlighting the amazingly complex and near-miraculous technical network that surrounds us. The interoperability between the big three tech companies, Google, Microsoft, and Apple, is something that I have only recently come to appreciate. I still remember the days of having to install certain pieces of software to use a certain browser, or having to delve into system files and manuals to get one thing to work with another. Great strides have recently been made to make our things communicate more seamlessly and sensibly, and I greatly appreciate that. BUT, the most important takeaway from the reading, at least from my perspective, was the reminder of the human element of interoperability: all of this communication between various pieces of tech is useless if the people operating these technologies are not able to communicate properly.

Relating this to what we are learning in class, the most obvious takeaway is that when making devices communicate with each other, we should make sure communication is going smoothly at all layers. And in order to do that, we need a good understanding of what is going on in each of those layers. Electrical signals are simply another language, after all, with their own forms of nuance and syntax. The devices we will be dealing with all have their own little quirks that we have to get to know and experience in order to make them communicate well. There is an underlying logic to it all, and I think we are well on our way to grasping the basics of it.

That being said, the reading has definitely reminded me to keep the human aspect in mind when dealing with these systems. When we get to the point of pointing complex networked systems at other complex networked systems, we have to remember that it is our duty to abstract that away and present an easy, accessible way for human beings to utilize whatever cool piece of tech we networked together. Otherwise, we aren’t really taking full advantage of the resources available to us, and it may also be a bit of a disservice to the thousands of engineers and designers who have done their best to hide all of that complex low-level stuff so that the average human being can have access to the cool shit that was previously only fully utilized by experienced technophiles.

Week 1 Homework Response, David Santiano

The other half that is important when it comes to augmented reality is the physical space in which augmented reality storytelling occurs. What I’ve seen so far from a lot of augmented reality projects is that they integrate the experience with already existing physical infrastructure. I think the future of quality augmented reality storytelling is the buildup of physical spaces tailored to augmented reality storytelling experiences. If we are to craft meaningful experiences, it might be best to exercise full control over the spaces in which we augment reality. However, I am speaking in terms of the immediate future, and I am also thinking of augmented reality storytelling as a medium similar to other visual forms such as movies or television.

My thoughts on the far future of augmented reality storytelling are much different. If we are to truly augment our reality, we need to do it in a way where it becomes the new reality. Large-scale change like this may be a bit of a pipe dream, but this class exercise is allowing me to be a bit grandiose with my vision. I envision an augmented reality experience that is seamlessly integrated with the physical world we currently encounter. Computer graphics and effects are becoming increasingly photo-realistic, processing power increases every year, and devices are growing smaller. If this rate of progress continues, I see a world in which we can modify our existing realities to suit our own needs. If you don’t like the color of the sky, change it. If you want to add a filter to change how you see the environment, you can do it in a believable and immersive way. The interfaces we use right now would be relegated to the role of antiques. Our reality will be augmented with an information-granting interface that we fully control and can access whenever we want. The social, cultural, and psychological impacts of this would be substantial, so I’m not confident enough to delve into that without massive amounts of speculation.


You come across two drinks. Which do you choose to drink?


Ah you drank the coffee! Behold! A coffee wonderland!


Ah you drank the water! Behold! A water wonderland!


A portal glimmers in the distance. You decide to walk towards it and see where it goes.


A portal glimmers in the distance. You decide to walk towards it and see where it goes.


You are back, do you wish to drink from the other cup?