Capstone update

I initially planned to develop my project in Unreal because it is considered a more ‘powerful’ engine. However, because the most important part of my project is rendering images as volumes (as opposed to textured polygons), I’m considering switching to Unity.

There’s a plugin on GitHub that allows me to render MRI data, so I can build on this.

There is also a plugin on the Asset Store that would let me edit things more efficiently, so I have asked Prof. Danyang Yu for funding from the Biology department; this is pending approval. In the meantime, I started building on the Adam demo, which has a similar internal environment and mood. I played around with the settings, built my own textures, and wrapped them around the tunnels.

Because MATLAB can also render volumetric images, I am also trying to compile that into a plugin (for either Unreal or Unity).

I am using a plugin that was initially built for Unreal and should allow me to do ICP (Iterative Closest Point registration). This could be interesting because it has many applications, including face recognition, which could let the user see their own face in the game. The problem is that the source is not compatible with UE 4.18 (the current version), so I am trying to recompile it with Python (!).
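Setting the plugin aside, the core loop of ICP itself is compact enough to sketch. Below is a minimal 2D version in Python with NumPy: repeatedly match each source point to its nearest target point, then solve for the best rigid transform (the Kabsch method). The function names are my own; this illustrates the algorithm, not the plugin's code.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=20):
    """Align point cloud src to dst by alternating matching and fitting."""
    cur = src.copy()
    for _ in range(iters):
        # Match each current point to its nearest neighbour in the target cloud.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Real face-scan data would need an outlier-rejection step and a k-d tree for the nearest-neighbour search, but the alternation above is the whole idea.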


Because I don’t want to get too sidetracked by this, I will stop if it doesn’t work by the end of the week and continue with the GitHub plugin and Unity.

Capstone Project Progress

So far…

1. Sensors  

At this point, I have tried several sensors and several different ways of placing them. I elaborated on this in greater detail in last week’s post here. I have yet to try the MindWave headset that Leon suggested; this will definitely be one of my top priorities.

2. 3D Models 

I actually began working on the 3D models of the flowers much earlier in the process. However, Prof Naimark suggested that I focus on the sensor part of the project first. This advice turned out to be very valuable, as I did run into several challenges while figuring out the sensors, ranging from not getting the data I wanted to not actually having the device in my possession.

This week, though, after making meaningful progress on the sensor part, I plan on resuming my work on the 3D models. Below is one of the models I would like to continue working on.

3. Connecting Arduino to Unity 

To connect Arduino to Unity, I initially did it without any Unity plugin, as demonstrated on this website. I tried using a potentiometer on the Arduino side and a preset 3D world example in Unity. While this approach worked, I ran into several errors that, I have to say, made me very nervous. In response, I figured I’d give Uniduino, a Unity plugin, a try.

Setting up the Arduino and Unity with this plugin turned out to be much easier. There are some points I had to constantly remind myself of, e.g. using the StandardFirmata library on the Arduino.

In testing out the plugin, I went through several stages. They were:

1. Testing out using the built-in LED

2. Testing out using a potentiometer to rotate a 3D cube in Unity

3. Using a potentiometer to change the colour of the 3D cube in Unity

I didn’t run into any meaningful problems during the first two stages, but in the third I couldn’t quite figure out how to make the colour change gradual. I suspected I needed to make the variable change smaller, but when I tried that, I still couldn’t quite create a gradual change of colour (fading in) on the cube. This is something I would like to explore further.
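For the gradual change, shrinking the variable step is usually less effective than moving the current colour a fraction of the way toward the target each frame (in Unity this would be Color.Lerp inside Update()). Here is that idea in plain Python; pot_to_color, smooth_step, and the smoothing constant are my own illustrative names, not Uniduino API.

```python
SMOOTHING = 0.1  # fraction of the remaining gap closed per frame; smaller = slower fade

def pot_to_color(raw):
    """Map a 0-1023 potentiometer reading to a colour channel value 0.0-1.0."""
    return min(max(raw, 0), 1023) / 1023.0

def smooth_step(current, target, k=SMOOTHING):
    """Move `current` a fraction k of the way toward `target` (per-frame lerp)."""
    return current + (target - current) * k

# Simulate 60 frames of fading toward a new pot reading.
current = 0.0
target = pot_to_color(1023)   # pot turned fully up -> target channel value 1.0
for _ in range(60):
    current = smooth_step(current, target)
```

Because each frame closes a fixed fraction of the remaining gap, the fade starts fast and eases out, which reads as "gradual" even when the pot value jumps.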

Looking back at my schedule…

It seems that I’m roughly on schedule. This week, as planned, I will have finished testing all the sensors I want. The exception is the 3D models, which I haven’t quite finished because, as Prof Naimark anticipated, sensor testing took a while.


Capstone: Midterm Update – TA


The part of the project with the most progress so far has been the sensor device. I’ve attached several sensors to track time and heart rate and display them back to the user, and more sensors could be added or swapped to customize the experience. Once the task is firmly decided, the final sensor device can be assembled and packaged nicely (or integrated into the space suit).

Speaking of the space suit, there hasn’t been as much progress here. The suit will consist of just an upper-body covering, a backpack for “oxygen”, and likely some gloves as well. The helmet, however, is a very tricky matter. All helmets on Taobao are either made for kids’ costumes or are extremely expensive (over US$100). I may look into alternatives to a traditional space helmet, such as a hazmat-type helmet with just a flat view piece and a flexible material around the rest of the head.

Finally, a room has been secured for this installation, so I can begin to decorate it to create the desired atmosphere.  I will be moving in to this room to start working within the next couple weeks.


Most of the original ideas from the project proposal are still present, but new ideas have been added. The one major change is the task the user performs. The original plan, a task to be completed quickly but without too much rapid movement, did not achieve the desired effect (based on user testing, summarized here), so a new task is required. One possibility is making the entire experience an effort to solve a particular issue expected on a real Europa mission, requiring some sort of skill-based effort rather than a time-based one (imagine the space equivalent of changing a car tire). One great advantage of this kind of task is that it explains why the user is not really on Europa, because the simulation becomes part of the immersive experience itself.

Additionally, the sensor device may be integrated into the space suit rather than being an external handheld device as originally intended.  This would not be too difficult to achieve, and seems more practical for a space mission than having to hold a device in your hand all the time.  I may also add a communication system such as a walkie-talkie, but this is far from being realized.


The immediate tasks are to get a suit together (particularly the helmet) and come up with a good skill-based task. Once I have a suit, I can look into integrating the sensors and other electronics into it. Then I can focus on the world-building, decoration, and story of the experience. At this point, I expect the physical and narrative elements to require roughly the same amount of time to finish, although the physical parts will be prioritized first. Of course there will be some progress on both sides throughout the coming weeks, but more time will likely be devoted to the physical elements first and the narrative elements later on.

midterm updates

Updates on level one:

I had two demos of the first level this week. I created the house and a maze to make the plot flow more smoothly.

This is the first demo, in which I built the house for level one, from which the soldier departs. This demo tested the lighting, the physics, and the shooting mechanic.

This is demo two: I planted trees in the yard, opened two windows in the roof, and created a door. Generally speaking, I improved its appearance.

The structure of the level:

I created a house surrounded by a forest. The forest forms a maze, and several events will be triggered on the way to the final destination. I haven’t created the events yet, but I have the structure and the outline of the map.

The square in the upper east is the house, and the little dots are the trees; the user will finally reach his destination in the south. There is a blank space I call the shooting range, which is where most of the plot will happen. I am still working on the cinematography.

This is my stage seen in perspective. The house is white right now; I’m planning to make it look more real. I will probably follow some tutorials from the Internet, but considering the differences in shape and size, I might have to 3D model everything on my own.

What I’m looking for:

I’m looking for feedback mostly on the interactions, the world setting, and overall playability. I don’t expect my users to know much about the historical context, but it would be nice if they could recognize which battle it is.



  1. To my surprise, all three people I consulted recognized which battle it is. Their reactions were neutral, neither overly patriotic nor detached, and all three showed respect for the history.
  2. The house should not look like an American house; it would be better to make a traditional Chinese house. From my understanding, Chinese houses were usually clustered together in a village. It would be nice if you could make a village, because what you have now really looks like an American farm. In addition, I suggest you paint the floor green and add some grass; right now it’s all gray. You could also use some tutorials from the Internet to make the trees look like actual trees.
  3. I really liked it. Your game reminds me of PlayerUnknown’s Battlegrounds, which I play every day. I’d like to see how users can pick up weapons and switch between them. It would add a lot of value if your game included functions from modern games.
  4. I don’t know if it’s possible, but I suggest you add a CG, like a small video the user watches before playing; it would illustrate the historical background better.



On historical context:

  1. I read five scholarly papers this week, adjusting my understanding of the background. Three of the five emphasized the importance of how long the battle took. It marked the beginning of the Chinese army actually fighting the Japanese after months of retreating without firing. The Japanese invaded China from the northeast, which at the time was ruled by General Zhang Xueliang. He did not use his army to defend his homeland and withdrew his troops all the way inside the Great Wall. The Great Wall was defended by a different force led by a different warlord, who gave orders to hold the line, and his soldiers obeyed. The significance of this battle is that it was the first time in the conflict between China and Japan that China took it seriously and responded with military power (rather than retreating as usual).
  2. Following from the last point: because it was the first time China took the conflict seriously, many people were motivated and encouraged to defend their homeland. One paper points out that the battle was of vital importance in the formation of Chinese national identity. The Japanese created an otherness that changed how Chinese people viewed the world. They used to view China as 天下, meaning "all under heaven": a China-centered cosmopolitanism in which state borders were unimportant and what mattered most was a common lifestyle and the recognition of certain manners and virtues. The Japanese invasion triggered Chinese resistance, demonstrated what a nation-state meant to Chinese people, and led them to adopt the concept.
  3. One paper in particular pointed out what one general did. He was a Christian and baptized his soldiers with a hydrant before battle. Other ceremonial events recorded in this paper include chopping off a pig’s head and using its blood to consecrate the flag, and killing a Japanese merchant in town to raise morale for battle. I questioned the reliability of this paper, because the author’s source was a newspaper run by the warlord at the time; the source could be propaganda or simply fabricated. But it makes one thing clear: Chinese generals tried to drive the fear out of their soldiers’ minds, and the low literacy rate really troubled them.
  4. Two incidents in the battle inspired the Chinese soldiers and the nation. In the first, a Chinese battalion ambushed a Japanese battalion and won. It was very rare for Chinese troops to win a battle at that stage, so this meant a lot to the Chinese people. The other incident, also published in newspapers, was that a few Chinese companies captured a Japanese tank. At the time, the tank had been invented less than 20 years earlier, and fear of tanks was everywhere. Defeating and capturing one encouraged Chinese soldiers to fight bravely.
  5. I also read about the conspiracies among the leadership. Different warlords were divided on whether to fight Japan. This is not very relevant to what I want to express, because I want to focus on ordinary soldiers’ lives rather than grand conspiracies, but I might still keep this material as background information in case I need it.

The papers I read:

Xu, Xiaomin. Zhang Xueliang and the Battle of the Great Wall. Japan Studies Press (China), 1991.

Yang, Li. Recalling the Battle of the Great Wall. The Socialist Academy of Hubei Province.

Hou, Jie & Chang, Chunbo. The Collective Memory and Historical Identity of the Battle of the Great Wall. Academic Journal of Zhongzhou, 2015.

Jin, Yilin. On the Battle of the Great Wall. Unite Journal (《团结》), 2005(4).

Ming, Zhu & Liu, Chunying. The Negotiation Between the Kuomintang Government and Japan. Journal of Changchun Teachers’ College, Sept. 2002.

Midterm Update on MYoice

As I mentioned in the capstone proposal, MYoice includes three major stages: an idea-testing stage, a user-testing stage, and a final design/execution stage. The project has gone through the idea-testing stage and is in the middle of user testing. I will explain in detail.

1. Idea Testing: During the idea-testing stage, which was completed over the past two to three weeks, I invited people both inside and outside IMA to talk to a simple recorder that could record their story and play it back. I wanted to see whether users enjoy, or feel intimate, talking to themselves (which is the core concept of MYoice). The result is encouraging: people do like talking to themselves, and some even asked me to send them their recorded files. Nevertheless, some users thought it would be helpful to have more guidance for using the storytelling machine, and some questioned the context given in the first prototype (which only asked users to talk to themselves). The result of this stage is encouraging, but it also reminded me to look for readings and research related to clinical psychology, and to conduct testing to find what context or interaction would help users feel more natural talking to themselves.

2. User Testing: I have already started a second round of user testing, inviting more people from outside NYU Shanghai to use MYoice and reflect on their feelings after using the machine. I will continue this user testing to find the best context and interactions this week (3.19-3.25) and next week (3.26-3.31). At the same time, I am completing the core technology part (estimated to be finished by 3.25), which will include a self-database, a sharing function, and QR code generation. In addition, I will show users different designs of the MYoice installation to see which they prefer (before Spring Break). This stage is estimated to be done before 4.8.

3. Design/Execution: The actual construction of the installation and the storytelling machine will start at the end of Spring Break, around 4.8-4.15. I estimate that building one installation will take two weeks, plus another week to fully complete it.

[SUCKERS] Capstone Update

Due to the length of my blog post last week, I think this update will be relatively short while still attempting to summarize the work that has been done for [ S U C K E R S ] up to this point.

While I had initially hoped to begin working on my capstone last semester, and I found the classes I completed then helpful in terms of preparation, I’m really starting to come to terms with the fact that the work I did last semester was exactly that: preparation. What I mean is that while most of the work I created then will not make it into the final project in the form it originally took, I found the processes I went through in Interactive Documentaries, UX Design, and Programmatic Design Systems (not to mention Intermediate Creative Writing) extremely helpful in various ways.

  • For interactive docs: though I may not use the documentary channels I created in the final installation, I was not only able to research real-life occurrences and media depictions of vampires, but working in the documentary form and participating in critiques also honed my eye for a project that has since developed into a docu-fiction series. It also prepared me well for the actual process of shooting and editing.
  • Though the wireframes and interactive prototypes I created in UX Design aren’t necessarily applicable, in that I won’t be creating an app as I originally intended, I learned valuable lessons about user testing and had the opportunity to think through the actual real-world interaction that users will have with my project.
  • Similarly, the generative posters I created in PDS may not make it into my final capstone project; however, I found the general design principles we learned in class to be extremely helpful guidelines for creating my poster series, particularly the lessons that encouraged us to reproduce particular artistic styles or works of art.

Production-wise, I had to push shooting back from my originally planned starting date for a number of unexpected reasons; however, one of the main reasons I chose to create an episodic series was to ensure that the experience could be created piecemeal (i.e. if I only have half of the originally planned time, I can create half of the planned episodes at 100% quality, rather than, had I planned one “feature-length film”, being left able to create only one half-quality experience). That said, the camera is finally here, and with the capstone budget approved (meaning I can purchase props, visual assets, and necessary plugins), I am now only at the mercy of my actors’ schedules. This week I am meeting with all the main characters who have agreed to be in the experience to go over their scripts with each of them and to shoot them with both 360 and pancake cameras to create posters and other promotional materials.

Story-wise, the three confirmed stories and one stretch story that I’ve finished scripting and would like to include in the final capstone are as follows:

  • a Vampires Anonymous meeting focused on a human without the vampire condition who is nonetheless addicted to drinking human blood
    • will look at addiction, allyship, and support networks
  • the first date between a vampire lady and a human woman who has never dated a vampire before
    • will explore sexuality and intimacy. It won’t attempt to be too emotional; it will play more on the awkwardness of first dates and keep a running joke about the vampire being decades older than the human.
  • a black vampire who more directly addresses issues of intersectionality and identity within the context of Shanghai
    • loosely follows in the footsteps of She’s Gotta Have It, looking first at others to define the main character before examining and interviewing her directly
  • a 富二代 “vampire hunter” who becomes infamous on the Chinese internet after using his parents’ money to support anti-vampire hate and purge groups around the world
    • will also explore the intricacies of native Chinese vs. laowai “power dynamics” in Shanghai while helping to ground the world of SUCKERS in the real world.

Live Texture➠Midterm Report

Following the capstone instructions, I adopted a Gantt chart, which is really helpful for time and project management. Here I attach the latest version of my Gantt chart.
Design, Production and Prototyping
My capstone is composed of data visualization, machine knitting, and an exhibition.
The user testing over the last 10 days concluded that the heat map and the stacked bar chart perform better at displaying worldwide births and deaths than the stacked line chart. Users could obtain more information from the numbers and a stronger emotional response from the black-and-white contrast in these two visualizations.
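For reference, the mapping behind each black-and-white heat map cell is just a normalisation of a count into a grey level (0 = black, 255 = white). A minimal sketch; the sample numbers are made up for illustration:

```python
def to_grey(values):
    """Min-max normalise a list of counts into 0-255 grey levels."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # flat data: avoid division by zero
        return [0 for _ in values]
    return [round(255 * (v - lo) / (hi - lo)) for v in values]

births = [120, 480, 960, 240]         # hypothetical counts per region
greys = to_grey(births)               # smallest count -> black, largest -> white
```

With only two yarn colours available, these grey levels would still need a further reduction to black/white (e.g. thresholding or dithering) before knitting.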
In the week of Mar 18, the focus of production moved from data visualization to machine knitting, as the machine had been delivered. The physical production boils down to three stages: understanding how the machine works, hacking it with the open-source software, and automation.
The machine, a Brother KH-930, was delivered with two thick books: an operation manual and a pattern book. The good thing is that I scheduled enough time to understand the machine by tinkering with it; the learning curve took as much time as I expected.
I successfully assembled the machine following the manual. Nonetheless, I encountered a glitch on Mar 14 that took me a whole day to solve. After testing the plain knitting function, I moved on to knitting the built-in patterns, which, in contrast to plain knitting, require electronic control and power. However, the machine kept blinking the number 888 on the screen, and the manual does not list a solution to this problem. I googled and found an e-manual for another version of the machine. According to the e-manual, the blinking signals a CPU error, and it provides a way to fix it, with a warning that the step clears all the memory. That halted me, because the manual does not state clearly what kind of memory would be cleared: specifically, the 500 built-in patterns, or the user-input patterns? As I could not risk losing the built-in patterns, I went through another round of searching with different keywords. Finally, an old forum post said that the step does not affect the built-in patterns, and the problem was solved without risk after rounds and rounds of googling. I narrate this encounter here because, through this glitch-solving process, I realized how difficult it is to gather information about such a vintage machine, whose production was discontinued 30 years ago. In contrast to most of today’s searchable documentation, the e-manual is a scan of a physical copy, and the solution came from a user’s answer written many years ago in an old forum.
Apart from tackling the machine, I also read a book, The Knitting Technology, to become familiar with the analog form: knitted fabrics. The book, published in the 1980s, covers many related topics, including knitting terminology and knitting machines, which makes it worth reading and conducive to my learning about knitting.
The Knitting Patterns
The machine supports a dozen knitting stitches and up to 500 built-in patterns. Each type of stitch requires a set of configuration steps: setting the buttons and rotary knobs on the carriage to the right positions. Sometimes it needs two carriages working together. It matters to me to understand not only what each stitch pattern looks like, but also what operations the machine requires to knit it.
The goal is to explore the possibility of integrating the texture of knitted fabrics into data visualization, which gives the output object a third dimension of attributes. For example, a concave or convex surface could represent a dimension of the data. Here I list, in turn, the stitches I have experimented with. Tuck Stitch requires some needles to skip the knitting loop for a few rows, so that the tension of the yarn on the skipped needles causes nearby loops to tuck together. Skip Stitch also requires some needles to skip, but the output pattern is composed of holes. Lace, Thread Lace, and Fine Lace patterns look similar and are often combined; they entangle one thinner yarn as the pattern and a thicker one as the background, so that the difference in yarn thickness sharpens the colour contrast. I have not tried the Fair Isle pattern; according to the description, it is simply a two-coloured pattern without differences in stitches or texture. I will further experiment with the Weaving and Intarsia patterns.
At this stage, it is settled that the output fabric will use black and white as the data visualization marks. In my data visualization design, a third colour is mainly used for the background and borders, to be distinguished from black and white. Nonetheless, the project no longer considers using more than two colours, because changing to a third colour requires manual operation. Instead, the project seeks to implement a knitted texture so that viewers can tell the pattern from the background by touch and sight; for example, the pattern is knitted convex and the background concave. The next step is to figure out the right pattern and hack the machine to take user-input images, while I wait for the PCB board and electronic components to be delivered.
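Once the machine accepts user-input images, each row ultimately has to reach it as a sequence of needle selections (contrast yarn vs background). One plausible first step is thresholding a greyscale image into that two-colour form; this is my own sketch, not AYAB's code.

```python
def image_to_pattern(rows, threshold=128):
    """Reduce a greyscale image to needle bits.

    rows: list of rows, each a list of grey values 0-255.
    Returns a list of rows of 0/1, where 1 means "select this needle"
    (dark pixel -> contrast yarn) and 0 means background.
    """
    return [[1 if px < threshold else 0 for px in row] for row in rows]

# A tiny 2x3 "image": dark pixels become 1s in the knitting pattern.
pattern = image_to_pattern([[0, 200, 90], [255, 30, 128]])
```

For photographic sources, dithering would preserve more tonal detail than a hard threshold, but a threshold is enough for the high-contrast charts described above.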
The open-source project AYAB controls the knitting machine digitally through an Arduino and a custom Arduino shield, and provides the schematics and bill of materials needed to produce the shield. I was not worried at all about finding a factory to produce the shield, i.e. a PCB board. However, when I sent the package I downloaded from GitHub to a factory, they rejected it on the grounds that the files were not well organized. What’s more, after I placed an order with another factory, it printed the wrong version of the shield. It turned out that I had been confused about the PCB files and sent the wrong version myself. It occurred to me that even though I probably won’t need to design a PCB board for this project, I should at least know the basics of PCB design in order to communicate with the factory. Here I express my appreciation to Rudy and Nicholas, who answered many of my questions with expertise and patience. I ended up ordering again and got the right version of the PCB board.
Even in the world’s factory, I failed to find counterparts for some of the electronic components required in the bill of materials. Fortunately, one Taobao vendor offers overseas purchasing of these components. As a result, the purchasing cost more time than I had scheduled.

Capstone: Midterm Process

Week1-Week3: Research Proposal

Beginning last semester, I started to think about what I wanted to do for the capstone. At that time, I only had a rough idea that I wanted to tell a story about my parents and me. I had several discussions with Sara about my ideas. Following are the storyboards I made last semester.

For the first three weeks, I worked on my research proposal. I found related articles and projects and used them as references. This helped me better understand what I want to do and what the final effect of the project should be. You can see my research proposal here.

Week4 – Week6: Storyboard & Sketching

From week 4, I started making the storyboard and sketching it in AI and PS. It was more time-consuming than I expected because I hesitated for a long time over whether to use AI or PS, and I spent much time finding a proper brush for my pictures. I tried building a brush in AI and drew some samples; the effect was not bad. However, I then started to think about how to animate the drawings. If I animated in AE, the movement of the body would look awkward, so I tried drawing in PS instead. But PS didn’t have a brush with the effect I wanted; I searched different websites and downloaded several brush packs, but just couldn’t find a proper one. Luckily, I found a tutorial and made my own brush in PS, which imitates the effect of a pencil.

After I finished sketching on paper, I sketched the images in AI and drew the finalized pictures in PS.

Week7: User Test-1

Once the thick paper I wanted arrived, I made the paper album prototype. It was the actual size of the final version, and I used it for my first user test. You can see a more detailed description in last week’s documentation.

Week8: Animation Making & Circuit Building

My initial idea was to start the technical part after finishing the animation. However, since I make the animation in PS frame by frame, it is really time-consuming, so I decided to work on the technical and animation parts simultaneously.

I did the animation in PS frame by frame. However, the original videos rendered from PS were too fast; most were merely one second long. To make them viewable, I changed the speed of the clips from 100% to 50% or 30% in PR.

Here are several finished clips. They will be shown when the audience first turns to a certain page.

Meanwhile, I had a discussion with Leon and Nicolas about how to recognize page turning. Nicolas suggested that I try input pull-up. I bought some thin magnets and made a mini prototype. Later I will paint conductive ink over the spots that have a magnet beneath them, so that when two magnets touch, they act as a pressed button. The one button that is not pressed indicates which page is open. I am going to finish the prototype this week.
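The detection logic reduces to: every closed page presses its magnet "button", so the single unpressed button marks the open page. A hypothetical sketch of that logic (not the final Arduino code):

```python
def open_page(pressed):
    """Determine which page is open from the button states.

    pressed: list of booleans, one per page-button (True = magnets touching,
    i.e. that page is closed). Returns the index of the open page, or None
    if zero or several buttons are unpressed (an invalid or transient state).
    """
    unpressed = [i for i, p in enumerate(pressed) if not p]
    return unpressed[0] if len(unpressed) == 1 else None
```

Returning None for ambiguous states matters in practice: mid-turn, two buttons are briefly unpressed at once, and the sketch should simply keep showing the previous page until the reading settles.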

Capstone Midterm Progress (Roopa)

Proposal and Overview: 

With the proposal formed, I became clearer about the specifications of my project and my priorities. I also identified the parts that I know need heavy user testing. Given the nature of my project, I think the best approach is to build a simple prototype/skeleton to get the technology working first, and then set up, polish, and user test the physical setup while I polish and elaborate the software part.

Project Timeline:

Link to schedule:–jPyo/edit?usp=sharing

Schedule-wise, I’m basically on time with what I have planned. I have done the software part: the basic technology, i.e. image manipulation and mapping Tobii data onto the image, is all working. There are still some small glitches I need to fix and some parts I need more user testing on (such as whether to do individual calibration). But there’s enough for me to start planning my physical setup, which I think is really important to this project and is not something I’m necessarily good at. I haven’t done much on the physical part yet (because I scheduled a week for other midterms, but for the wrong week); this will be my focus this week. After I finish a full prototype, I will just need to elaborate and polish it, which is my main task for April.
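The Tobii-to-image mapping step can be sketched simply: the tracker reports gaze as normalised screen coordinates in [0, 1], which get scaled and clamped into the pixel grid of the displayed painting. The function name and sizes below are illustrative, not the Tobii SDK.

```python
def gaze_to_pixel(gx, gy, width, height):
    """Map a normalised gaze point (gx, gy) in [0, 1] to an integer pixel (x, y).

    Clamping keeps edge readings (e.g. gx == 1.0) inside the image bounds,
    which matters when the gaze value is used to index a pixel array.
    """
    x = min(int(gx * width), width - 1)
    y = min(int(gy * height), height - 1)
    return max(x, 0), max(y, 0)
```

Individual calibration would effectively adjust gx and gy before this step; the pixel mapping itself stays the same either way.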


I’m not really good at visuals, but they’re a really important part of my project. I already have something working for the prototype, but I don’t think that’s what it will look like in the end, and I’ll need to keep working on it. To record my inspiration and generate more ideas, I created a journal of both the technical and visual effects of potential variations.

Link to my  journal:

User Testing: 

I have done user testing with four users who had no formal knowledge of my project, from different backgrounds and with different physical conditions (height, glasses or not). The two main things I learned are actually what I need to do more user testing on: 1) whether to calibrate, and 2) which painting(s) to use. But in general I have a basic idea of what it will be like. I have written a report of the user testing, and I have also recorded videos of each individual user for future reference.

Link to user testing journal:


Capstone Midterm Progress, Ariel (Naimark)

Here is my midterm work-in-progress report. I’ve been working on animation production and hardware tests over the past week.

Here is the demo of the animation with the Chinese-style line painting. The size is 1080*1080 because the final projection is on the plate, which is a circle. The elements for the first story are finished. The beginning, with the script introducing Shanghai, is a stroke-by-stroke stop motion of the skyline. Animation production is now at the first story, life in the lane. The next scenes to make are dinner time at grandma’s place, the dish, and the second story, in which dad refuses to eat the dish. In the demo, I haven’t adjusted the speed of each clip or added transitions, because the script still needs revision on details (especially the description of the food in English).


I’ve done the configuration on the Yun, which I will use for the final version. For now, the circuit test is still on an Uno, where I use an ultrasonic sensor and a touch sensor to trigger the video. The hardware test shows that these two sensors and the program logic work well with Arduino and Processing. The rest of the triggers are similar to these two, so the serial communication part should not be a big problem. Since I will use four separate headphones, each will be plugged into its own Serial MP3 player. The sound-trigger function will therefore be done in Arduino, and the others through Processing.
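The trigger logic on the Processing side can be sketched as a threshold with hysteresis: fire the video once when the ultrasonic distance drops below a near threshold, and re-arm only after the reading rises past a farther one, so noisy readings around the boundary don't retrigger the clip. The thresholds and class name are illustrative.

```python
NEAR_CM, FAR_CM = 20, 30   # trigger below 20 cm, re-arm only above 30 cm

class UltrasonicTrigger:
    """Fires exactly once per approach, using hysteresis to ignore sensor noise."""

    def __init__(self):
        self.armed = True

    def update(self, distance_cm):
        """Feed one distance reading; returns True only on the triggering frame."""
        if self.armed and distance_cm < NEAR_CM:
            self.armed = False         # fired: stay quiet until the user leaves
            return True                # -> play the video
        if not self.armed and distance_cm > FAR_CM:
            self.armed = True          # user moved away: ready for the next visit
        return False
```

The touch sensor can use the same pattern with its own pair of thresholds, which is why the remaining triggers should indeed be straightforward.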


The table and plates have been ordered and will hopefully arrive early next week. The process has taken longer than expected because, during the projector test, I realized that tables with shiny paint reflect the light, so users would see concentrated light somewhere in their vision, which is definitely uncomfortable. Finally, I found a vendor selling wood-only tables. The design for heating the dish during presentation has been updated to use “heating pads” that release heat when put into water. In that case, all I need to do is put the water and heating pads together in the container (the bowl with the dish inside will be placed above the water container). The mechanism holding the heating pad is controlled by servos.

I’ve been thinking about this question for a while. I think it would be interesting to represent user behavior with the visual elements, generating the visualization at the end of the dining experience. By calculating how long users eat the pork belly and how much of it they eat, I can hopefully map the data into different categories (how people behaved in the 1960s, 1970s, etc.), which is similar to the user reports of the food delivery app Eleme. The visualization could be either “food tickets” from different time periods in different amounts, or a special drawing representing each period. I will keep thinking about this question, listen to feedback, and maybe come up with a better idea.
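The behaviour-to-decade mapping could be as simple as bucketing the fraction of pork belly eaten into an era label. The cut-offs and labels below are invented placeholders to show the shape of the idea, not real historical data.

```python
def eating_to_decade(fraction_eaten):
    """Map the fraction of the dish eaten (0.0-1.0) to a hypothetical era label.

    Cut-offs are illustrative: a cleaned plate evokes the scarcity of the 1960s,
    a mostly untouched dish the abundance of later decades.
    """
    if fraction_eaten >= 0.9:
        return "1960s"
    if fraction_eaten >= 0.5:
        return "1970s"
    if fraction_eaten >= 0.2:
        return "1980s"
    return "1990s"
```

Eating duration could feed in the same way, e.g. as a second bucketed axis, with the pair of labels choosing which "food ticket" or drawing to display.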

What to do next:

  1. The major production work is the animation, which has started but is a little behind schedule. The animation and the video of the cooking process should be finished by the end of next week, and the week after that will focus on post-production.
  2. The hardware task is to check whether SoftwareSerial can work together with regular serial communication. If not, the Serial MP3 players will use another Arduino board (the original plan was everything on one board).
  3. Look into structuring the table (especially where to make holes for wires and where to place the sensors).
  4. Projector and MadMapper.