“Dining in Shanghai” is a tabletop projection-mapping installation about local urban culture and the concept of “family” in China. Through media and technology, the project offers users a dining experience in which they enjoy braised pork belly, a classic Shanghainese dish, along with a local family’s story about the dish.
“Dining in Shanghai” elaborates the dining experience with multi-sensory design in sound, sight, smell, and taste. Users first watch a short animation that illustrates the family’s connection with the dish from three generations’ perspectives. In the following gastronomy documentary about cooking with a traditional recipe, the vivid sonic environment of cooking braised pork belly in the kitchen, together with the smell of the dish triggered by heating, whets users’ appetites before they enjoy the signature home dish. During the eating process, user behavior data is collected to generate a customized output for each diner: a different Chinese calligraphy of “食”.
The project aims to introduce food histories and family culture in China, but more importantly, the storytelling in the dining experience reminds users of their own experiences with food and family. Audiences in different age groups can position themselves in the storyline where real-life details and experiences resonate with them most strongly – for example, the scene of kids playing games in the lanes in the Chinese fine-line animation.
The project is an experiment in using psychological effects on users as another element of multi-sensory design: users’ emotional interactions help diners look into the humanity and urban consciousness behind the media and technology. As society steps into the digital age, this project could become an innovative example of integrating digital technology with the humanities, making precious oral history and the progress of urban development accessible for people to explore or look back on in the future.
For the second round of user testing, I got real food to test the logistics and the projection part (video and, partially, audio). My users are Weiyu, Tiffany, and Alex, CS-major students from CO2018, who also participated in the first round of user testing.
The process of this user test: 1. sit down and trigger the sensors (ultrasonic and touch sensor); 2. watch the animation and cooking video; 3. manually put the heating pad into water as soon as the video reaches the cooking process.
We had a discussion about:
1. Length of the video: the animation part seems OK if the narration is tightened. The cooking process seems too long – especially once the food is there, it is hard for people to concentrate on the video.
2. Sound of the video: my users like the cooking sounds, but they suggested fixing one little detail – making the sound louder when I pour the water out.
3. Projection: the material of the plate is fine – “the little reflection works perfectly.” However, the edge of the plate is too curved for projection. Leon suggested projecting onto the middle area, where it’s flat. A flat plate would maximize my use of the surface.
4. User interaction:
1) Dining experience: the feedback I received in class is that I should serve the meat while the video of putting food into the bowl is playing. In the user test, we ran the dining experience with no instructions, and, as expected, users had no idea where to start, whether they could use the plate, or how much they could eat. I think serving the food might be the better solution.
2) About the generative part: after asking users, I learned that they prefer the chopsticks to be wireless (like real dining). A magnetic switch therefore seemed the most feasible solution. Then in Tuesday’s class, Michael suggested that I could play with people’s seats, using a pressure sensor to detect their movements. That installation would be much easier, but I’m still thinking about the algorithm and how it would synthesize with the food-history content.
The heating pad works. The only change I need to make is to replace the black plate with a bamboo case (with a container inside) that lets steam rise and surround the food, since right now the food is only heated on one side. The 4–5 minutes of heating is enough for the food. In addition, the bamboo case will conceal the smell during the video part, which might be a way to reduce distraction.
Here is my midterm work-in-progress report. I’ve been working on the animation production and hardware test in the past week.
Animation: Here is the demo of the animation with Chinese-style line painting. The size is 1080×1080 because in the end the projection is on the plate, which is a circle. The elements for the first story are finished. The beginning, with the introduction-to-Shanghai script, is a stroke-by-stroke stop motion of the skyline. Production is now at the first story, life in the lanes. The next scenes to make are dinner time at grandma’s place, the dish itself, and the second story, in which Dad refuses to eat the dish. In the demo, I haven’t adjusted the speed of each clip or added transitions because the script still needs revision on details (especially the English description of the food).
I’ve done the configuration on the Yun, which I will be using for the final version. For now, the circuit test is still on the Uno, where I use an ultrasonic sensor and a touch sensor to trigger the video. Through the hardware test, these two sensors and the program logic work well across Arduino and Processing. The rest of the triggers are similar to these two, so the serial communication should not be a big problem. Since I will use four separate headphones for users, the headphones will be plugged into Serial MP3 players separately. So the sound triggering will be done in Arduino, and the rest in Processing.
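As a language-neutral sketch of this trigger logic, here is a minimal Python example. The serial message format (`DIST:…`, `TOUCH:…`) and the distance threshold are my assumptions for illustration, not the actual sketch:

```python
def parse_line(line):
    """Parse a serial message like 'DIST:35' or 'TOUCH:1' into (name, value).
    The message format is a hypothetical convention, not the real protocol."""
    name, _, value = line.strip().partition(":")
    return name, int(value)

def should_start_video(lines, dist_threshold_cm=50):
    """Start the video only when someone is seated (ultrasonic reads close)
    AND the touch sensor has fired. The threshold is a placeholder value."""
    seated = False
    touched = False
    for line in lines:
        name, value = parse_line(line)
        if name == "DIST":
            seated = value < dist_threshold_cm
        elif name == "TOUCH":
            touched = value == 1
    return seated and touched
```

The same two-condition gate would live in the Processing serial callback; the remaining triggers follow the identical pattern with different message names.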
The table and plates have been ordered and will hopefully arrive early next week. The process took longer than expected because, during the projector test, I realized that tables with shiny paint reflect the light, leaving users with a spot of concentrated light somewhere in their vision, which is definitely uncomfortable. Finally, I found a vendor selling wood-only tables. The design for heating the dish while presenting has been updated to use “heating pads” that release heat when put into water. In this case, all I need to do is put water and heating pads together in the container (the bowl with the dish inside will be placed above the water container). The mechanism holding the heating pad is controlled by servos.
**** ABOUT USER INTERACTION ****
I’ve been thinking about this question for a while. I think it would be interesting if I could use user behavior as the element that generates the visualization at the end of the dining experience. By calculating how long and how much users eat of the pork belly, hopefully I can map the data into different categories (how people behaved in the 1960s, 1970s, etc.), which is pretty similar to the user report of the food delivery app Eleme. The visualization could be either a “food ticket” from a different time period with a different amount, or a special drawing that represents a different period. I will keep thinking about this question, listen to feedback, and maybe come up with a better idea.
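As a toy illustration of that mapping idea in Python – the thresholds, the pace metric, and the decade labels below are entirely hypothetical placeholders, not a worked-out design:

```python
def decade_for_diner(duration_min, pieces_eaten):
    """Map eating behavior to a decade 'persona' for the generative output.
    Pace (pieces per minute) and all cutoffs are made-up illustration values."""
    pace = pieces_eaten / max(duration_min, 1)
    if pace > 2:        # fast, hearty eating
        return "1960s"
    elif pace > 1:
        return "1970s"
    elif pace > 0.5:
        return "1980s"
    return "1990s"
```

The returned category could then select which “food ticket” or drawing the projector renders for that diner.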
What to do next:
The major production work is the animation, which has started but is a little behind schedule. Animation production and the cooking-process video should be finished by the end of next week. The week after next will focus on post-production.
The hardware task is to check whether SoftwareSerial can work together with the serial communication. If not, the Serial MP3 players will go on a second Arduino board (the original plan was everything on one board).
Look into structuring the table (especially where to make holes for wires and where to place the sensors).
Title: Botanical Adventure – Plants sing and we hear them.
In the concept presentation session, I received feedback to build more physical interaction into the interface. Therefore, I brought plants and watercolor paintings into the interface. Using rice paper, I created a watercolor garden with another layer of dry paper covering it. When I water the dry paper, it sticks to the painting layer beneath and reveals the patterns. In addition, I brought in real plants to represent different instruments, like an orchestra. The Lego figure in the middle represents a person moving around to hear different plants sing, based on location. With this interface design, I decided to use a moisture sensor, a touch sensor, and a joystick for the hardware.
The data processing was originally designed to be finished in Arduino, including the distance calculation. However, when dealing with the serial communication, I found it easier to do the raw processing in Arduino (digital on/off signals and the joystick location) and the distance calculation in Max (though it looks complicated). The distance is linearly mapped to volume. Sean showed me a Max package, Spat, that has advanced algorithms for spatial sound. It’s very expensive but definitely good for advanced spatial sound effects.
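The linear distance-to-volume mapping is easy to sketch outside Max. Here is a Python version; the coordinate scheme and the maximum distance are my assumptions for illustration:

```python
def linear_map(x, in_min, in_max, out_min, out_max):
    """Unclamped linear mapping, analogous to Max's [scale] object."""
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

def distance_to_volume(dx, dy, max_dist=200.0):
    """Volume falls off linearly with the figure's distance to a plant.
    dx/dy are offsets from the plant; max_dist is a placeholder value."""
    dist = (dx * dx + dy * dy) ** 0.5
    dist = min(dist, max_dist)          # clamp so volume never goes negative
    return linear_map(dist, 0.0, max_dist, 1.0, 0.0)  # near = loud
```

In the patch, the equivalent is a distance expression feeding a [scale] object into each track’s gain.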
At the concert:
Please refer to the live video on the IMA Facebook page.
Lessons and Possible Improvements:
The presentation at the show could be improved for the audience’s experience, especially the patterns on the rice paper, which were not easy for them to see. Leon advised laying the plants out across the stage to make the performance a real “tour” of the project. I see this as a possible solution if I use a Yun and RFID for the hardware, but I’m not sure about the communication between the Arduino Yun and Max.
In the previous weeks, I was working on the animation script in Shanghainese, Mandarin, and English and on the production of animation sketches. This user test focuses on three parts:
1. users’ understanding of the content (the script): Are people interested in the topic? How long should the storytelling be?
2. physical interaction: Can people understand when to press the projected button (pressure/touch sensor)? Do they know when they can eat?
3. logistics, as mentioned in previous one-on-one meetings: What if people mess up the food, and how? How will people eat the food (while talking, or while looking at the projected content)?
The user test is based on a combination of the animation storyboard, instructions (to be projected) presented on paper, and real food. Since the project could have four people participating simultaneously, I invited three users to test first individually and then as a group.
1. Users’ understanding of the content
For this topic, feedback varied. In general, users are most interested in the story of our generation, which is the beginning of the animation. As the story moves forward, more questions, confusion, and loss of interest appeared around the visual part. The feedback:
a. The story of the young generation affects people emotionally, especially the warmth of life in traditional lanes and the details of daily life.
b. The transitions between the three generations’ stories should be improved. The storyline needs some revision in the order of scenes.
c. Should the story be more personalized around my family, or should it be more general? This is a debate among my users. There’s a tradeoff between timing and an individualized story (since the latter needs more illustration of the background and a clearer connection to general food history).
2. Physical interaction: the bias of this user test is that all the participants are from the young generation, while my envisioned audience also includes elder generations who experienced the great famine period in China. Therefore, I conducted a separate mini test with my mom for these questions. With light instructions, young users were able to understand what to do during the test. With elder generations, however, more questions came up, such as “Is it a real button?” The general feedback:
a. It’s possible to use simple physical interaction triggers, like a magnet on the chopsticks, to bring in more user experience.
b. The light instructions are easy for young people to understand. However, during the group test, I observed that users might influence one another’s behavior.
c. While designing the circuits, make sure that users are eating at a dining table, not a lab table with dozens of wires.
3. Logistics: the logistics test focused on the real food-eating experience. My users gave very creative comments on the logistics.
a. In the user test, we observed sauce dripping onto the table while users picked up the braised pork belly and put it on their plates. It would be better if the food during the show were drier (but will that affect the original flavor? asked another user) or if the sauce were stickier.
b. It’s a good idea to put triggers on the chopsticks, but the placement of the trigger across the timeline should be considered. In the end, users just laid the chopsticks across the food container/plates instead of returning them to the chopstick rest.
c. Whether users will chat or look at the visualizations really depends. It’s hard to forecast their behavior at this stage.
Therefore, I’m going to:
1. Revise the storyline and fix some details in the animation design (for example, in the scene where the clock rings for dinner, the hands should point to a dinner time).
2. Start thinking about the construction of the table. I’ve been testing an FSR and a touch sensor over the past week. The FSR is better, and it could be replaced by a self-made sensor (based on resistance or conductivity). Also, my NIME project uses similar sequencing in the Arduino code, so the basic logic is already there.
3. Do more observation of user behavior with the dish. I need more samples.
Inspiration and Music Composition: Unlike the original plan, I started with the composition rather than the interface. The rhythm came randomly – the “flute” part came into my mind when I was on a bus – and gradually I added the harmony, the chords, and the beats. The image, a.k.a. the “story,” my friends had in mind after listening to the whole piece was 8-bit video-game music for a forest adventure. Therefore, I wanted to make my interface more “game-like.”
Interaction Design: Arduino with a joystick and touch sensors? The idea comes from game controllers, given the “game” theme of the music. I hope to create a 3D sound effect with the joystick, as if the character were walking around the forest, and to trigger the tracks with different buttons.
Performance: It’s about practice. I need to make sure the tracks are on the beat – especially the chords; otherwise it could sound funky.
Demo of the composition
Feedback: Maybe use some physical interaction triggers (e.g., rocks?). Need to think about the game controller for a performance in terms of motion, story interpretation, etc.
This assignment uses serial communication between Arduino and Max. I got touch sensors to work with the patch. On the Arduino side, if we open the serial port, we can see the sensor values or messages being sent out. If we connect the Arduino port to Max instead, the values can be processed in Max. Therefore, physical sensors can work as an instrument.
I used touch sensors, which provide a digital output of 0 and 1, similar to a button but with higher accuracy and faster response. I have inputs from two touch sensors, which control 1) pitch (with some randomness in the instrument) and 2) the sample.
Here is a demo of this assignment – a little messy, with no real composition…
In last week’s class, we learnt how to use Max for MIDI, sampling, and synthesis. For the assignment, I created an instrument that uses these functions to play music. The piece I worked on is Ode to Joy.
I use the keyboard to control the pitch, for which I created two parts – single pitches and chords. I also played with different instruments to change the quality of the sound, using if statements to check which keys I’ve pressed so that each matches a specific instrument. In addition, I used the mouse position to control add-on effects and remixing with presets in Max.
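The key-checking logic can be sketched outside Max. This Python sketch mirrors the idea of matching keys to pitches and instrument programs; the key bindings and the instrument choices are my assumptions, not the actual patch:

```python
# Ode to Joy opening in MIDI note numbers (E E F G G F E D C C D E E D D)
ODE_TO_JOY = [64, 64, 65, 67, 67, 65, 64, 62, 60, 60, 62, 64, 64, 62, 62]

KEY_TO_PITCH = {"a": 60, "s": 62, "d": 64, "f": 65, "g": 67}   # C D E F G
KEY_TO_PROGRAM = {"1": 0, "2": 40, "3": 73}  # GM: piano, violin, flute

def note_for_key(key, program_key="1"):
    """Return (midi_pitch, gm_program) for a pressed key, or None if unmapped.
    This is the if-statement dispatch described above, as a dict lookup."""
    if key not in KEY_TO_PITCH:
        return None
    return KEY_TO_PITCH[key], KEY_TO_PROGRAM.get(program_key, 0)
```

In Max, the same dispatch would be a [select]/[route] chain feeding [noteout] with a program change.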
This is a body-controlled Kinect music instrument performance by the artist Ross Flight, a “sound designer, producer, engineer and interactive technologist.” For this project, he uses the Microsoft Kinect. Later on, he did more experiments integrating Max and Ableton Live.
This week, we soldered piezos during class time and were supposed to use them to amplify a music instrument.
I first made a prototype with a paper can. The problem is that paper is not rigid enough, so the volume is very low. Also, because the fibers damp the vibration, the amplification isn’t really utilized.
Then I tried an aluminum can, cutting slices similar to those in the paper prototype. Though I was not able to tune the pitches to standard ones, it worked in its way, producing diverse sounds and pitches. Thanks to Han Su, who offered me great suggestions during the production process.
Concept: In the natural world, every creature is built on a clever skeleton and shaped by its own forces and the forces of its environment. The beauty of the natural world is that these forces combine and find a balance, producing impressive natural beauty – for example, the fish tail. So I would like to recreate the fish tail digitally to express my appreciation and translate that beauty into art.
Inspiration: My midterm work on the skeleton inspired me to explore more about skeletons and the forces inside a creature’s body. I still use springs for the main structure. However, the fluid feeling in the midterm was “fake”; when we learned about flow fields in class at the end of the week, I wanted to combine the two elements and make the fishtail more natural.
Progress: 1. Figure out a complex force system where everything is on the same page and influences everything else. The natural tail’s forces include the leading force of the skeleton, the force of the muscles, and forces from the outside world. When I tried to combine these forces in my sketch, it turned out to be very hard to find the balance point: with unadjusted forces and parameters, the structure easily goes super crazy, like this:
After figuring out the balance between the flow field and the spring force of the tail, it works this way:
What I really like in this version is that the tail waves as if it were alive and interacting with water. However, two points still bother me: a) it looks like many fish swimming together as a school; b) when they reach the flow field’s center, they go crazy again.
Also, in this version, I worked more on the details of the spring force and length, so that the structure naturally forms a tail shape, with different values passed into the constructor for the spring length – which is all about math… It’s also a general formula: we can create as many levels as we want with this structure (if p5 can hold them…).
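The spring “bone” math reduces to Hooke’s law. Here is a minimal Python sketch; the per-level tapering of the rest length is my own guess at the “different value passed into the constructor,” not the actual code:

```python
def spring_force(pos_a, pos_b, rest_len, k=0.1):
    """Hooke spring between two 2D points; returns the (fx, fy) force on a.
    k and rest_len are the tunables that make or break the balance."""
    dx = pos_b[0] - pos_a[0]
    dy = pos_b[1] - pos_a[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1e-9  # avoid division by zero
    stretch = dist - rest_len
    fx = k * stretch * dx / dist
    fy = k * stretch * dy / dist
    return fx, fy

def rest_lengths(levels, base=40.0, shrink=0.85):
    """Hypothetical tapering: rest length per skeleton level, shrinking
    geometrically toward the tail tip so the tail narrows naturally."""
    return [base * shrink ** i for i in range(levels)]
```

Each flow-field sample and muscle force then just adds onto the same per-particle force accumulator before integration.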
Then I added 2D noise to the background so that it looks like watercolor water.
2. Exploration with another skeleton: After a long time struggling with the previous tail structure, Prudence pointed me to another example – a skeleton that uses springs but no flow field, which you can find at processingjs.org. This also reminded me of the other skeleton I built previously, purely with physics attraction. It occurred to me that the physics attraction could act as muscle and the springs as bone. However, this is not the skeleton a real fish has.
Here is the demo of the new skeleton:
The physics attraction part is designed so that the later parts of the tail move less (it actually works this way: the movement delta is passed into applyForce, and with a smaller delta and a particle position further down the chain, the force becomes even smaller):
particles.get(a).applyAttraction(particles.get(a-1), a, moveDis);
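As a rough, language-neutral illustration of that damping idea in Python – the exact scaling below is my assumption, not the actual Processing code:

```python
def damped_attraction(move_dis, index, strength=1.0):
    """Force magnitude pulling a particle toward the previous one in the chain.
    A smaller movement delta and a larger index (further down the tail)
    both reduce the force, so the tail tip trails with softer motion."""
    return strength * move_dis / (index + 1)
```

In the sketch, this magnitude would be applied along the vector toward `particles.get(a-1)` inside `applyAttraction`.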
In the final presentation, there was a bug with the tail: I found that in Processing, adding to and removing from an ArrayList works a little differently from push()/pop() on a p5 array, so I needed to set the positions another way. By moving the code from p5.js to Processing (Java), the capacity increased a lot.
3. More visuals:
During the final presentation, I got feedback that the tail should be larger. Thanks to Professor Moon for continually giving me suggestions and feedback on the visuals. The first adjustment I made was to add a follow path for the fish shape. Due to the limitations of the path’s physics, I needed to separate the path into two lines (for which I mostly used Dan Shiffman’s sample code for building paths). Then Professor Moon pointed out that since the tail is my project’s focus, I could give the particles the visual shape of a tail to emphasize it. In the final version, the tail is generated without the flow field first; the flow field can then be triggered by a mouse click, so that the particle tail becomes a little fluid.
When several different forces come together, the system becomes much harder to control, since an adjustment of 0.001 can break the balance. In this situation, a GUI mode can be helpful for finding the right values.
Physics with digital recreation is really amazing. I enjoyed it a lot!
During the show I received various feedback. I think it would be worthwhile to turn this into educational material that helps kids understand phenomena of natural life.
If I have more time, I will explore creature skeletons and their development further, try to apply them to different scenes, and have fun with it – maybe in OF, since it’s more powerful!! Finally, I would like to thank Professor Moon, Jiwon, Jack, and all my classmates for your great help and feedback throughout the process!