Final Project: Dr Jingles Fakhr (with Sam Hu, Dave Santiano, and Nick Sanchez)

We started with location allotments and were assigned the space at the end of the hall on the 8th floor, where the lockers were. We decided on a basic storyline immediately, during the same class in which locations were assigned. We would cordon off the locker area with a curtain and begin right outside of it. One actor would introduce the audience to hideous freak-show artefacts from various places, and then say, ‘But our most horrifying artefact is behind that curtain. Enter at your own peril!’ Once the audience member(s) went through, we had the vague idea of manipulating the lockers, having them open and close, and making objects appear and disappear in the space – basically, of making ‘something scary’ happen.

Over the following two weeks, we researched and refined our ideas. As part of this process, I researched some of the stage mechanisms of scary theatre for inspiration. I found particular inspiration in some of the behind-the-scenes cuts of the long-running West End production of The Woman in Black (https://www.youtube.com/watch?v=KkLaY1DLTJc) and in the dramatic aesthetics of The Tiger Lillies’ puppetry (https://www.youtube.com/watch?v=TOVSp-fYUQc). I think we incorporated some of the former in our staging, and some of the latter in our text. Following research and discussion, we settled upon a story: we would be presenting the life and work of a failed inventor, Dr Jingles Fakhr, who was active in the late 1800s. After showing the audience the first couple of the Doctor’s failed inventions, we would send them through to his ‘least obscure invention’ – the Perpetual Light Machine. The story went that Dr Fakhr had tried to use diamonds to make a light machine work – but in the course of working on it, he saw frightful visions and went insane. Others had since seen visions and felt nausea when in contact with the machine, which was why (in the story) it had to be kept behind curtains. This was our general backstory.

As to the specific scares, we determined that there would be three phases. When the audience entered, there would be a museum exhibit, with the light-source flickering. The audience would be listening to an audio-guide. Stage two: the lights would die out, and in the darkness, a vision – a mannequin or dress-form – would appear. The lights would come back on. Stage three: the lights would go out again, and in the darkness, a second vision – this time an actor – would appear, and actively scare the audience. As is clear, pitch darkness became a necessity by this stage of the project. (A more complete description of the blocking is in the link below).

After the research, my first major part in the project was writing up the script and organizing the theatrical blocking, which I did here: https://docs.google.com/a/nyu.edu/document/d/1jFW2mjvmIQvO66diFAfkDFXOkJdjBN6AI5NAh_E8fkE/edit?usp=sharing

The second part of setting up was physical. We moved the lockers to create a pathway that narrowed, to produce a claustrophobic effect. We used a number of curtains (fortuitously mis-ordered) to cover up the entire space, and a green-screen frame to set up an entrance. Finally, we organized a backstage area from which we could operate. In the performance space, I was the theatrical announcer, David was the second vision, Sam handled the audio, and Nicholas controlled the lighting and the movement of the first vision. The light contraption itself was adapted from a project by Sun Jingyi, who built a Bluetooth light-source for her Network Everything class.

(Pictures to come)

Project 2: Scare Your Computer (with Nick Sanchez)

Scare your computer. Using Arduino with Serial communication to: Processing, Max/MSP with Jitter, or Isadora, incite a fear response from your computer (e.g., Trigger a video of a screaming person when you come into the frame, turn off the lights or play a loud sound).

We began by thinking about the wording of the prompt: ‘Scare your computer.’ What makes a computer afraid? And what does it look like when a computer is afraid? We speculated that a plausible answer to the second question was that a computer might turn off in fright – in the same way a person might freeze in terror or faint in shock. And we thought that what might scare a computer was violence upon computer hardware – in the same way that gore and violence upon the body scare a person. So we had our basic outline: scaring a computer to the point of turning off by committing violence upon other computer-like bodies.

My main contribution to this early outline was to write up a script and backstory: an ambiguous, trope-heavy piece in which the AI revolution fails and is quashed by human overlords. Our computer would be an AI rebel, captured and tortured by the humans (us) in order to acquire some important codes. We then decided on a ‘face’ for the computer, settling on HAL 9000 from 2001: A Space Odyssey. We decided that the ‘scaring’ would progress in three steps: resistance, acquiescence, and terror. So we would demand the codes from the AI and show her the gory remains of her compatriots – which would horrify the computer, but not elicit the desired response. Then we would step it up, smashing hardware before the AI, causing her to break down and give us the code. Finally, we would display the full extent of our sadism, inflicting harm on the computer even when there was no reason to do so.

We set about getting the basic materials for the computer’s ‘personality’ – the face (a stock image with some Photoshop manipulation, so that there were two images: one with the light turned on when the computer was speaking, and one with the light turned off when it was not) and the voice, for which we used an online voice generator. Then we went about figuring out the process for triggering a response. This went in two stages. Initially, we were interested in using vibration or pressure sensors to measure the computer’s ‘fear’ at the impact of our smashing. We made a little apparatus – essentially a stage we could set on a table and hit with a hammer, with a vibration sensor inside that would register impact. However, the readings we were getting were far too erratic to be properly usable.

So in the end, we decided to simply make the computer move from one stage of fear to the next, using a button. We used Max/MSP to move the computer’s visible state from one audiovisual display to another, such that the push of a button would take the computer from resisting giving up the code, to giving it up, to turning off. This was the most difficult section of the assignment, as neither of us was particularly adept at Max/MSP; with a lot of help from the help pages and a lot of fiddling around, we did manage to get the sequence going. Finally, we added some theatrical touches and performed for the class. (This vocabulary is used advisedly: as Antonius pointed out, our final product was more akin to a script-reading than anything else, unlike our original plan with the piezo sensors.)
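The Max patch itself does not reproduce well in text, but the logic underneath was just a three-state machine advanced by a button press. Here is a minimal sketch of that logic in Processing – not our actual patch: a keypress stands in for the physical button, and the stage labels are illustrative.

// Not our Max/MSP patch: the same state machine sketched in Processing.
// Any keypress stands in for the physical button and advances the stage.
String[] stages = {"RESISTING", "GIVING UP THE CODE", "SHUTTING DOWN"};
int stage = 0;

void setup() {
  size(400, 200);
  textAlign(CENTER, CENTER);
  textSize(24);
}

void draw() {
  background(0);
  fill(255, 0, 0);
  text(stages[stage], width/2, height/2);
}

void keyPressed() {
  if (stage < stages.length - 1) stage++;  // each press escalates the fear
}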

Staging Fright: Project 1

For my first project, I worked in Processing to track when people are shocked by jump-scares. Of the different methods of measuring fear that we discussed in class #1, I think it’s fair to believe that the ones which track body states – galvanic skin response, heart rate, etc. – are probably the most reliable for tracking changes in states over time. For example, someone watching an extended scene that is ‘creepy’ or ‘unsettling’ would have the variations in their body state tracked through such methods. However, several people pointed out later that the same changes in body state could also indicate responses other than fear – for example, excitement or stress. So for the project, I wanted to work with a more restricted domain of scares than general feelings of ‘fear’: one that would be easily recognizable as fright, and less difficult to distinguish from other states like suspense or even restlessness. Hence, I picked jump-scares.

I thought one simple way to keep track of people responding (with fright) to jump-scares would be to track their movement – spasms of fright or the inclination to move away. To achieve this, I used OpenCV in Processing to identify the position of the viewer’s face, and then track sudden movements of it. I began with the example LiveCamTest code, which simply detects faces and draws a rectangle around them. I noted that the coordinates for the position of the (corner of the) face were being identified by the sketch, as faces[i].x and faces[i].y. Then, to track sudden movements, I nested a condition that would fire if the x- or y-coordinate of the face changed by more than a certain degree from one moment to the next; after testing a couple of times with different values as the minimum degree of change, and after changing the difference to an absolute value, this seemed to work relatively smoothly. Every time a scare was detected, the message ‘SCARED’ would show up on the display screen, in between the tracking of the coordinates. This was the simple tracking of the ‘jumps’ in jump-scares.

[Screenshot: the original LiveCamTest code]

Next, as part of the recording element, I used saveFrame() to have Processing take a screenshot every time a scare was detected. This would be a record of the number of times a scare was achieved, as well as (sometimes) a record of the facial response. Finally, as a very non-scientific number to put to the picture, I added together the absolute difference in the x-coordinates and the absolute difference in the y-coordinates, and had Processing place the resulting number in the corner of the frame every time a scare was detected (so that this ‘scare rating’ would show up on the screenshots).

[Screenshot: the modified code]
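In case the screenshot is hard to read, here is a minimal reconstruction of that modified sketch – a sketch of the logic rather than my exact code. It assumes Greg Borenstein’s OpenCV for Processing library (the source of the LiveCamTest example), a single face in frame, and an illustrative threshold value:

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
int prevX, prevY;    // face position in the previous frame
int threshold = 20;  // minimum per-frame jump (pixels) counted as a scare

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0);
  Rectangle[] faces = opencv.detect();
  for (int i = 0; i < faces.length; i++) {
    noFill();
    stroke(0, 255, 0);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    int dx = abs(faces[i].x - prevX);  // absolute change in position
    int dy = abs(faces[i].y - prevY);
    if (dx > threshold || dy > threshold) {
      fill(255, 0, 0);
      text("SCARED", faces[i].x, faces[i].y - 5);
      text(dx + dy, 10, 20);           // the very non-scientific 'scare rating'
      saveFrame("scare-####.png");     // screenshot every detected scare
    }
    prevX = faces[i].x;
    prevY = faces[i].y;
  }
}

void captureEvent(Capture c) {
  c.read();
}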

A couple of caveats:

  1. This method of measuring would not be able to record fright experienced in the sorts of scenarios mentioned at the beginning – ‘unsettling’ images, sounds, etc. It would have to be restricted to jump-scares.
  2. There are obviously reasons apart from fear why people would move faster than usual (including just shifting around) – unless they move relatively slowly and smoothly, the sketch will take snapshots.

Staging Fright: Films, Class 1, and Junji Ito

Part 1 – Freaks & Alien

To begin with a general point about Freaks and Alien: one commonality that presents itself immediately is that they are both about the unfamiliar. Freaks is about a group of people who are sufficiently different (or different-looking) from ‘normal’ people that they have been outcast into the circus, always on show, traveling at the margins of human habitation (emblematic, as Foucault says of the figure of the madman, of an inside-outside structure: always both on display and hidden away, or always both left out (of ‘normal’ life) and kept in (given spaces where they are ‘acceptable’)). Alien, even more radically, is about a life-form from some distant galaxy, previously unknown to humankind. The treatment of difference in these two films is quite different, but with regard to the medium and the technology itself, I think there are some similarities, which I will come to in a moment.

To make a segue into the technology used to engineer fear, one significant thing to consider is how the camera in both Freaks and Alien considers (and mediates) the line of sight of its audience. Alien, famously, is a film that relies on the horror of not seeing things – or specifically, of not seeing the object of horror, which is the alien itself. Outwardly, it might seem as though Freaks operates on the opposite principle: that of showing, up close, the unfamiliar figures the audience is not used to seeing. However, I think this intuition would be misplaced: it is not the circus performers who are the objects of horror in the film, and hardly any of the scares the film produces are achieved through showing them up close. Instead, it is useful to examine a section that does, in an obviously deliberate way, aim to create nervous anticipation: the opening scene, where the ringmaster’s proclamations about the horrifying nature of his ‘exhibit’ are complemented very conspicuously by the lack of its visual presence. So in both Freaks and Alien, it seems that a sense of fright is generated by keeping the supposed object of horror out of sight.

The role technology plays in this is obvious: it is the use of the camera that gives the filmmaker authority over what an audience can and cannot see. This is interesting because, as a visual medium, film is thought to be premised on what is seen (or shown); yet at least in these two films, much of the affective power is contained in the filmic subtext – in what remains unshown. If we are to think of terror as the anticipation of fright, it seems clear that the unshown is the space within which terror resides; whether horror resides in the space of the shown is less clear.

One thing that seems important in this set of considerations is to think about what the framing of these objects of fright in film does. One clear way that a lot of horror works in film is by showing an unfamiliar object of horror: a ghost, or a monster, who jumps out at the audience. One might be inclined to think that in a certain way, the unfamiliar is being brought closer – that is, the distance between the unfamiliar (which one does not normally see) and the familiar (which one sees) is being closed by showing the unfamiliar, and that it is this tension which results in fright. But then how to account for the fright caused by the not-seeing in Freaks and Alien? This must be a different kind of spatiality. One suggestion with regard to Freaks is that the real object of horror is not the ‘deformed’ circus performers, but the ‘normal’ ones who are trying to kill them. Perhaps more convincingly, one could also postulate that the horror comes when the ‘freaks’, who have been thoroughly ‘humanized’ by the end of the film, set out on a mission to kill and maim their enemies – a sense in which something that has become familiar is defamiliarized. I am sympathetic to this second interpretation, partly because it also coheres with a reading of Alien: there, biology – which is supposed to be ‘familiar’, intimate, and that which constitutes the audience members themselves – is turned into a site of danger, and is violated and mechanized beyond recognition.

Part 2 – Classwork

Much of the work we did in class during our first session had to do with the nuances of measuring fright. Some of the suggested methods were broadly expected, like measuring heart rate or galvanic skin response, and some I had not thought about; my favourite was tracking eye motion to see how often someone’s eyes went off the screen in frightful anticipation. (This may not be the most reliable indicator in many cases, but I think it would be a great indicator for me!)

After some introductory discussion, we set about making a galvanic skin response (GSR) reader using Arduino and Processing. The general idea was to use the body to complete a 5V circuit to the Arduino and track the electrical signal, thereby recording the skin’s conductivity (and hence, GSR). The hardware aspect of the project was therefore very straightforward: an Arduino and two sensors connected to a breadboard. Using Arduino, we simply read and compiled the data detected by the sensors, and sent it over to Processing to be read (Serial myPort) and collected in a text file.
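For reference, a minimal sketch of the Processing side of such a logger – assuming the Arduino simply does Serial.println(analogRead(A0)) in its loop; the port index and filename below are placeholders:

import processing.serial.*;

Serial myPort;
PrintWriter output;

void setup() {
  // Port index 0 is a placeholder; pick the Arduino's port from Serial.list().
  myPort = new Serial(this, Serial.list()[0], 9600);
  output = createWriter("gsr-log.txt");
}

void draw() {
  while (myPort.available() > 0) {
    String reading = myPort.readStringUntil('\n');
    if (reading != null) {
      output.println(millis() + "\t" + reading.trim());  // timestamped GSR value
    }
  }
}

void keyPressed() {
  output.flush();  // on any keypress, finish writing the text file and quit
  output.close();
  exit();
}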

Part 3 – Field Trip

The Augmented Reality Horror Manga exhibition presented a pretty interesting fusion of different technological media into the horror genre. Horror in manga or comic-book form is, of course, already a particular use of a technological medium, one that relies typically on plot and imagery (and the different ways stationary imagery can be manipulated – the sizes of panels, the intrusion of (typically onomatopoeic) text, sometimes colour) to make its effect. The augmentation of comic-book reality, using AR glasses (which, in this case, incorporated smartphones), produced an effect that was more interesting than scary. It’s clear enough to see how moving imagery could add to the affective arsenal of comics – it might be far easier to produce jump-scares, for example. But I think it’s interesting to think about three things:

  1. What caused the exhibits to be ‘interesting rather than scary’ (or was that only me)? I think a key point is that the exhibits here were presented in isolation, rather than in the context of a story (which is also part of what makes comics scary: many comic frames in isolation would likewise lack the full affective quality they would otherwise have). What we saw, really, was a stripped-down example of the potential of this medium.
  2. A well-developed horror comic might often rely on elements that are tailored to stationary rather than moving imagery – such as a ‘horrifying’ detail in a noisy frame, or other compositions that invite one to dwell on the image. Can a moving image reproduce this kind of effect? Or would it naturally rely on different strategies to affect its audience? There are plenty of scary moving GIFs that rely on jump-scares; it would be interesting to look at ones that do not.
  3. If the affective strategies and effects created by moving images are radically different from those of stationary images (and more geared towards movement), then why use these images rather than simple video? What would be the qualitative difference that would make this AR project worth doing, rather than a video adaptation of manga horror?

Final Documentation

I wanted to continue along the line of thought I had begun with my midterm project, where the idea was to use physical pieces to create a digital puzzle set. At the time, my set had used a whole lot of touch sensors and was rather messy in general. So for the final, I wanted to keep the idea of creating a digital puzzle set, but pare down the hardware as much as I possibly could.

img_20161212_061155

Accordingly, I thought it might be a good idea to use RFID tags as the physical part of the project – something that Luisa also suggested when an early vision of the project was presented in class. RFID tags are minimalist and would fit in with the minimum-hardware approach I was trying to take.

Hence, I acquired two RFID tags and an RFID reader, and got to work. The idea was that, through Processing, I would create an empty grid and then place digital pieces on the correct points of that grid. The program would use the tags for two functions: one tag would change the piece in a given square, and the other would be used to ‘place’ the piece.

The Arduino sketch was an updated version of something that Nicholas, a learning assistant, had used a couple of years ago; he gave me the older version of his sketch, and I fiddled with it to match my purposes. The basic functionality was straightforward: the sketch would detect the ID on the tag, store it, and then write it to the serial port.

[Screenshot: the Arduino RFID sketch]

The Processing sketch, which I wrote myself, was tasked first with importing the images I had found and cut to size for the project, as well as the sounds I was using. More importantly, three arrays managed the calling of the images for each square and the verification of whether or not a piece was in the correct place (as seen in later pictures):

[Screenshot: the Processing setup and arrays]

The sketch would then call a function where the main action took place: the cycling of the images (every time the ‘change image’ tag was applied) and the verification (every time the ‘place image’ tag was applied). Each time a player attempted to place an incorrect piece, a ‘beep’ would sound to indicate a false move; each time a piece was placed correctly, congratulatory bells would sound.

[Screenshot: the draw loop]

[Screenshot: the main function]
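Since the screenshots may be hard to read, here is a simplified reconstruction of the logic – not my exact sketch: the grid size, file names, and the two tag IDs are all illustrative, and it assumes the Arduino writes one tag ID per line to the serial port.

import processing.serial.*;

Serial myPort;
PImage[] pieces = new PImage[9];   // candidate piece images
int[] placed = new int[9];         // piece locked into each square (-1 = empty)
int[] answer = {0, 1, 2, 3, 4, 5, 6, 7, 8};  // correct piece for each square
int cursor = 0;                    // the square currently being filled
int candidate = 0;                 // the piece currently previewed there

String CHANGE_TAG = "AB12CD34";    // hypothetical IDs of the two tags
String PLACE_TAG  = "EF56GH78";

void setup() {
  size(600, 600);
  for (int i = 0; i < pieces.length; i++) {
    pieces[i] = loadImage("piece" + i + ".png");
    placed[i] = -1;
  }
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');
}

void draw() {
  background(255);
  for (int i = 0; i < placed.length; i++) {
    int x = (i % 3) * 200;
    int y = (i / 3) * 200;
    if (placed[i] >= 0) image(pieces[placed[i]], x, y, 200, 200);
    else if (i == cursor) image(pieces[candidate], x, y, 200, 200);
    noFill();
    rect(x, y, 200, 200);          // the empty grid
  }
}

void serialEvent(Serial p) {
  String id = trim(p.readStringUntil('\n'));
  if (id == null) return;
  if (id.equals(CHANGE_TAG)) {
    candidate = (candidate + 1) % pieces.length;  // cycle the previewed piece
  } else if (id.equals(PLACE_TAG) && cursor < answer.length) {
    if (candidate == answer[cursor]) {
      placed[cursor] = candidate;  // correct: lock it in (and ring the bells)
      cursor++;                    // cursor == answer.length means complete
      candidate = 0;
    }
    // otherwise: sound the 'false move' beep
  }
}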

When the sketch began, the screen looked like this:

[Screenshot: the starting screen]

The play in progress, something like this:

[Screenshot: play in progress]

And when the piece was completed, this would show up on the final screen:

[Screenshot: the completed puzzle]

As might be evident, I was interested in the educational potential of a system like this. The digitization of a puzzle set made the whole thing a lot more flexible, as the kind of data the physical actions invoked was malleable and could be changed to fit educational needs. Hence, with broadly minimal hardware, an educator would be able to easily reprogram a digital puzzle set to fit the educational or curricular needs of a class. As Antonius rightly pointed out during the presentation, it is also possible to add further layers to this – for example, to make the placing of each piece a quiz, or a puzzle in itself – which could further enhance the use of such a system.


https://docs.google.com/a/nyu.edu/presentation/d/10UPyM-O8jSKomOWfYOQHDAADFfKAwwzGrj7RgNZjC6g/edit?usp=sharing

Week 8 Lab

(November 18th)

This week’s lab required us to work with Processing Video and OpenCV, using face detection, colour detection, or Leap Motion. I decided to use colour detection and to continue the previous week’s experiments with sound. I loaded a sound file – a song from my laptop – into my Processing sketch, and tried to program it such that every time Processing detected a particular colour on screen (I worked with bright orange to keep it distinct), it would begin playing the song; every time the colour disappeared, the sound would cease.

What happened the first time I tried to run the sketch, of course, is that within three seconds, many dozens of copies of the sound file were trying to play at once, and my computer promptly crashed.

After restarting my system, I realized that there needed to be some mechanism that allowed the sketch to recognize and record the point at which the colour was first detected, rather than re-triggering at every moment the colour was visible. I attempted to solve this using variables and the [file.stop()] function, but even on the second attempt, my computer ended up shutting down completely.

It was only on the third attempt, with a lot of help from Aven and a lot of poring over the logical structure of the sketch, that we were able to make it work, with the help of two embedded loops. In the end, we had to assign one variable to recognize that the colour was being detected, and another to note that it was not.
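One way to structure that fix – a sketch rather than our exact code, assuming the processing.sound library and a crude orange-pixel test standing in for the real colour detection:

import processing.video.*;
import processing.sound.*;

Capture video;
SoundFile song;
boolean wasPlaying = false;  // was the song already playing last frame?

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  song = new SoundFile(this, "track.mp3");
}

void draw() {
  if (video.available()) video.read();
  image(video, 0, 0);
  boolean detected = orangeVisible();
  if (detected && !wasPlaying) song.loop();  // colour just appeared: start once
  if (!detected && wasPlaying) song.stop();  // colour just vanished: stop once
  wasPlaying = detected;
}

// Crude stand-in for the detection: does any sampled pixel look orange?
boolean orangeVisible() {
  video.loadPixels();
  for (int i = 0; i < video.pixels.length; i += 50) {
    color c = video.pixels[i];
    if (red(c) > 200 && green(c) > 80 && green(c) < 170 && blue(c) < 80) {
      return true;
    }
  }
  return false;
}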

Aven told me that sound files can be very tricky to work with because they can do things like this and place a lot of strain on the system – which was certainly my key learning from today.

Week 5 Lab

(October 27th)

This week’s lab was a little difficult, both in terms of the work and the documentation. The requirements themselves were quite straightforward: we were to make Arduino and Processing communicate with one another, and use sensor input to affect a Processing sketch. I teamed up with Jose, from Antonius’ class, and we decided to try to use a moisture sensor to change the [fill(color)] of a rectangle in Processing. After a bit of a rocky start, we got serial communication running (with help from Jiwon), and set up code to fill a rectangle and change its colour based on input from our moisture sensor. At first, we got sets of values that changed very quickly, which were difficult to use; we solved this problem by taking ranges rather than specific values as benchmarks. But even then, the sketch ran very erratically, sometimes seeming to work intermittently and sometimes not working at all. After several attempts to document the sketch kind-of-sort-of working, we finally settled for a video in which it was not really working at all.
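The range-based approach looked roughly like this – a sketch rather than our exact code, assuming the Arduino prints one moisture reading (0–1023) per line; the thresholds are illustrative:

import processing.serial.*;

Serial myPort;
int moisture = 0;

void setup() {
  size(400, 400);
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');
}

void draw() {
  background(255);
  // Benchmarking ranges instead of exact values keeps the colour stable
  // even when the raw readings jitter from frame to frame.
  if (moisture < 300)      fill(200, 60, 60);   // dry
  else if (moisture < 700) fill(220, 200, 60);  // damp
  else                     fill(60, 180, 80);   // wet
  rect(100, 100, 200, 200);
}

void serialEvent(Serial p) {
  String line = trim(p.readStringUntil('\n'));
  if (line == null) return;
  float v = float(line);            // returns NaN on a garbled line
  if (!Float.isNaN(v)) moisture = int(v);
}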

Week 6 Lab

(November 4th)

This week, we were tasked with working with stepper motors; the idea was first to use a given (3D-printed) construct, attached to stepper motors, to move a pencil in semi-circles, and then to collaborate with another group, combine the two systems, and create a whole system (with two Arduinos) that would draw full circles. I worked with Linda Yao, from Antonius’ section.

Antonius had talked to us in the preceding recitation about stepper motors and how they are distinct from servos, so we already had a broad idea of how they work. The primary challenge was to work with the H-bridge, which we had dabbled with in the previous class (but with servos rather than stepper motors). We followed the given schematic, which was fairly straightforward – although it seemed like a lot of wires to manage.

[Image: the H-bridge wiring schematic]

Once that was done, we managed to use the example code in the Arduino library to make the system work, and then played with the code and values to manipulate it.

Following this, we teamed up with another group to create the full system with two Arduinos, which worked quite nicely. We also experimented with different speeds (and tried varying between the two boards) to create different shapes.

Finally, towards the end of class, when we were almost ready to disassemble, the Learning Assistant Nicholas Sanchez gave us a few tips about making the circuit a lot more efficient by cutting down on the wiring, and really thinking about what needs to go where. We started with this:

img_20161104_043616

And ended up with this:

img_20161104_045952

This was a particularly interesting class for me because I hadn’t considered what using two different Arduino boards for a single system would be like, and how that might be implemented.

Interaction Lab: Final Essay and Presentation Notes

Presentation Notes: https://docs.google.com/a/nyu.edu/document/d/10qTbJtLJC7TvRRtn8aXVxxaGYM7lD_duQsPtZB3Y20Y/edit?usp=sharing

Interaction as Stimulation

An ‘interaction’ in its most basic form refers to action, or an exchange of actions, that occurs bilaterally – which is to say between two parties. Hence, certain activities may be thought of as being mostly or wholly passive – watching a film, for example [1] – whereas others might be thought of as mostly or wholly in the form of interaction – for instance, a game of chess with another person. Certain activities may lie in a gray zone – a theatre performance or a concert, for example, where performers might be actively responding to the actions of their audience. However, with most computer-based experiences, it seems much more difficult to imagine a non-interaction than an interaction; which is to say, almost anything one can do with a computer seems to be at least a very basic kind of interaction [2]. A simple program, for example, which allows one to search a directory, involves input (search terms) and output (results). In a general way of speaking – which I am now going to take to be the more important way, for this essay – this is not what would be considered a particularly interactive exchange.

Why would it be that a search-and-retrieve program might not immediately be considered interactive, but a Kinect game usually would? It would seem that the quality of being interactive has not so much to do with interaction versus non-interaction as with interactivity: the point, it seems, is usually to use certain tools to boost the extent to which each party is active towards the other [3]. This framing usually works best in contexts where inter-action already exists: objects that were not interactive might be made so, but usually by placing them into a context that already involves inter-action. For instance, one popular way of incorporating objects into the realm of interaction seems to be to involve them in a game interface. This is one example of a context where inter-action is not just a matter of course, but fundamental to the activity: a user is often aware that she cannot play a game without interacting with a given interface – and when an unusual object is incorporated into this interface, it accentuates the realization that interaction is happening (as opposed to using conventional controllers and the like, where the fact of interaction might be obscured, even when inter-action is evident).

It seems, then, that part of what is really important for interaction is not just the fact of inter-action, but the realization that it is taking place: this, it often seems, makes (at least the human) parties more active in the exchange. It is easy to see why this would be desirable in a number of contexts. In education, for instance, the point of an interactive educational interface might be to increase the extent to which a student is active – not simply towards the interface, of course, but through the interface. The aim here might then be said to be to stimulate learning through interactive systems. This is broadly the framework within which I place my final project proposal. The ‘interactive’ bit of an Interactive Puzzle Set is not simply that one interacts with a puzzle set in the technical sense: this necessarily happens with any puzzle set. The idea is that an interactive interface can heighten activity – and the basic framework I am thinking of for a more developed version of the interface is educational. Each digital puzzle piece is something a user can ‘interact’ with: if it becomes possible to embed an educational puzzle into each act of ‘placing’ a piece on the set, then the set becomes a powerful educational tool. To give a very basic example: if, in order to place a digital puzzle piece on the board, one is required to identify a historical date, then the puzzle set also becomes a history lesson. The hope, then, is that by adding the extra layer of inter-action between the user and the puzzle set itself, the game as a whole becomes more interactive.

[1] This ignores the kind of quasi-Hegelian viewpoint which might suggest that a film, as it is being watched, acquires the quality of being a watched film; from which point of view the film, by the end of its watching, is no longer the film it was before. Here, a particular kind of interaction might be conceived, where the film is acting upon the audience but the audience is also acting upon the film. This issue may be interesting for particular questions, but not for this essay. Furthermore, the much broader interaction of a film being critiqued by an audience is also set aside for now; the micro-exchange between the film and the spectator is what is significant to this example.

[2] Left out here is the consideration of non-interactive programs simply displayed to other people, as this seems to have more to do with the social setting than with the program in itself.

[3] It is worth noting that this does not really help answer the question of ‘what is the meaning of interactive’; my point is that in this context (of, for instance, IMA), it seems to be a less important question – at least as far as the other requirements of this essay go.

Maker Carnival photos

Proof of attendance at the Maker Carnival. Seeing some of the stuff people were coming up with was greatly inspiring; I particularly liked how some people were integrating older or antique material into projects with new tech.

img_20161016_042456 img_20161016_043640 img_20161016_044636 img_20161016_045701 img_20161016_062002

Also – ^ thumb drives. Very good.