Final Project: Adaptive Utensil

First of all, I'd really love to express my thanks to all the guest speakers we had in this class, and to Professor Marianne Petit, who led me to this awesome class and shared many helpful resources with us. I personally love this class very much because I've been thinking of designing an assistive product for disabled people ever since I saw patients in a rehabilitation center doing their best to get well. One of my family members suffered a cerebral hemorrhage two years ago and was left with some sequelae. Having spent months in the rehabilitation center, I knew many patients were eager to recover completely, or at least to live their own lives without the help of others. That is what I think assistive technology is supposed to help with. From my understanding over this 7-week course, assistive tech is designed and developed to assist people with disabilities in communication, education, work, recreation, and daily living tasks; in essence, it helps people with disabilities be independent and enhances their quality of life.

During our visit to CereCare, we met some children who had trouble holding utensils or stationery steadily, and since Jeffrey has been interested in food-related topics, we decided to design a product (prototype) for the children at CereCare who can hold things but cannot hold them firmly enough to eat easily.

Project Name: Adaptive Utensil

Name: Zihe Quintus Wang

Partner: Jeffrey Kung, Zeerak Fayiz

Project Description: Adaptive Utensil is an assistive device designed to help kids with cerebral palsy at CereCare hold utensils easily and firmly. It pairs a universal squeezable handle with various interchangeable heads, allowing children to switch between different utensils such as a fork and a spoon.


Drill that has interchangeable heads

Squeezable balls that provide a comfortable feeling and solid connection with hands


I went back to CereCare to observe the children's behavior while eating and found that some children were not our target users: children who are capable of eating by themselves, and children who are not able to hold utensils at all and need to be fed by teachers. I would definitely like to think about another assistive technology for the latter group, but at the current stage we could only focus on our target children.

We divided the creation of our project into three broad steps: sketches, a first prototype iteration, and a second prototype iteration.


In the beginning, we had a bunch of ideas and it was really hard to figure out where and how to begin. So Jeffrey recommended that we draw out different ideas to get things started. While drawing, we came up with about five different prototypes with different mechanisms for switching out utensil heads. However, we soon realized that many of the ideas we had in mind would be too advanced for a first or second prototype iteration and that we should start with basic materials first.

1st iteration of prototype:

For the first version, we used a hard paper tube for the base, along with tape, a metal fork, foam, and a bottle cap.

The prototype was very easy to put together; however, a lot of concerns arose once it was complete. Firstly, the fork was not interchangeable. Secondly, the foam had no protective layer on top of it, making it completely unwashable. So for our next prototype, we set out to address those issues.

2nd iteration of prototype + materials

The second iteration of our prototype was a little harder to put together, but it was sturdier and more complete than the first version.


  • PVC pipe – the base of the handle, providing a solid structure for users to hold; in case any PVC burrs hurt the children, we also sanded the edges with sandpaper until they were pretty smooth;
  • Foam – the second layer of the handle, wrapped over the PVC pipe; it feels soft when users hold and squeeze it, and since foam is easily shaped and fits different hand shapes, it also helps keep the handle from slipping out of users' hands;
  • Plastic utensils – considering food safety and protecting children from hurting themselves, we decided to use plastic utensil heads instead of metal ones;
  • Cardboard – we noticed the utensil heads would be very unstable without something behind them, so we put a round piece of cardboard in the middle of the pipe to keep the heads from being pushed into it;
  • Magnets – since the utensil heads are interchangeable, we thought of many ways to make the swapping process as easy as possible, and then realized that magnets are a really good way of connecting parts and taking them apart;
  • Metal strips – because our utensil heads are plastic, we had to add metal strips so the magnets could attach to them; with food safety in mind, we placed the strips at the very end of the heads so children would not touch them;
  • Mod Podge and plastic bags – with the two layers (PVC pipe and foam) in place, we needed to cover the handle with some waterproof material. Our initial intention was to use silicone (cut from a swim cap), but we later decided to cover the handle with plastic bags and Mod Podge, which has the same effect; one advantage is that children can also draw on the Mod Podge to create their own handle styles;
  • Tape & glue – these, of course, were for holding everything together.

Here is the link to our slides, which include a demo video.



From the perspective of the project, I think my collaborators and I are satisfied with what we made, because it can really help and solve problems instead of just showing off technology without a concept and rationale behind it. To be honest, I learned many things in the process of making this project, including but not limited to listening to feedback, making improvements across iterations, and exploring the possibilities of different materials: telling their differences apart and deciding which could best serve our purpose. In terms of collaboration, it was a really good time discussing with my groupmates, seeing different possible outcomes, and shifting our ideas based on the situation.

Final Documentation for Hyperbolic Orchestra

It was a super awesome performance at the Power Station of Art! I really loved all the performances led by my classmates, which gave me many new ideas and concepts of "orchestra". Many thanks also to Professor Clute and Professor Chen, who decided to mix these two classes so we had an opportunity to combine audio and video arts together and generate these amazing outcomes.

I collaborated with Shiny Wu and Cindy Yudi Jia from the Video Art class on our project Distorted. Distorted is an audiovisual performance that shows distorted personalities through a combination of videos and sounds in an installation form.

One person showing different personalities is a common scene in daily life. Even people without dissociative identity disorder may behave differently in different situations when facing different people and occasions, which is called "impression management." We wanted to show that sometimes these "managements" can start out effective and for the good, but eventually become uncontrollable.

To achieve this, our plan for this performance was to:

  • have a performer moving the mirror frames standing at the center of the stage
    1. the turning of the mirror frames represents the fluidity and unpredictability of constantly changing personalities;
    2. by pushing the frames to turn and crossing them, the performer shows that the speed and the changing of personalities are not under his/her control
  • videos will be projected on the big screen
    1. the contrast between the performer (real-time reality) and the distorted images on the big screen shows the different personalities
  • some other visual effects will be projected on the frames (due to stage limits and considering the integrity of the performance, this idea was dropped)
  • audio will be processed with sound effects like delay and reverb
  • videos, visual effects, and audio will change along with the performer's movements

Having made the plan, we started to build the project, constructing the installation and developing the content at the same time. We spent a lot of time discussing the structure, tried constructing it many times, and failed many times because of material choices and the budget limit.

This is our initial idea for the structure.

Then, at last, we took Ann's suggestion of an umbrella structure and found it would work better than the others. We built a base using a spinning plate and PVC pipes so the whole structure could spin. For the head part, we used cardboard, wood board, shiny paper, and fabric to build frames, or rather curtains, which served the same purpose.


This is what the installation finally looked like. It was super fragile, so after considering many ways to transport and store it safely, we decided to assemble it at PSA and just leave it there.

As for the content, we filmed video footage, found more footage online, and layered them in grids to indicate different personalities. It showed the protagonist going through several states: a white-collar worker, a scared girl, a desperate woman, a stubborn old man, and a helpless dog.

For the audio, I used Max/MSP to build a player that could play a bunch of audio files at the same time, change their speed, and even reverse them. I had 12 different sound pieces, many of them created or recorded by ourselves in Logic Pro. To make the sound fit the context, I also mixed in delay and reverb. These sounds generally serve as theme songs for each personality and transition, so the audience can easily notice when there is a transition between personalities, and tell which specific personality it is, by perceiving the different audio, video clips, and the performer's movements with the installation.
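Outside of Max, the core of that player (variable-speed and reversed playback, plus a feedback delay) can be sketched in plain Python. This is a hypothetical illustration of the signal logic only, not the actual Max patch; the function names and parameters are my own:

```python
def play_transform(samples, speed=1.0, reverse=False):
    """Resample a mono buffer (list of floats) to change playback speed
    (>1 is faster/shorter output, <1 is slower/longer) and optionally
    reverse it, using linear interpolation."""
    x = list(reversed(samples)) if reverse else list(samples)
    n_out = max(1, round(len(x) / speed))
    out = []
    for i in range(n_out):
        # position in the source buffer corresponding to output sample i
        pos = i * (len(x) - 1) / max(1, n_out - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(x) - 1)
        out.append(x[lo] * (1 - frac) + x[hi] * frac)
    return out

def feedback_delay(x, delay, feedback=0.4, mix=0.5):
    """A simple feedback delay line (echo): each output sample feeds a
    scaled copy of itself back in `delay` samples later."""
    y = [0.0] * len(x)
    for i in range(len(x)):
        echo = y[i - delay] if i >= delay else 0.0
        y[i] = x[i] + feedback * echo
    return [(1 - mix) * a + mix * b for a, b in zip(x, y)]
```

In Max itself, roughly the same behavior comes from objects like groove~ (variable-speed and reverse playback of a buffer~) and the tapin~/tapout~ pair (delay).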

To be honest, I was really impressed by all the performances at PSA; they were totally distinctive. I was rarely into video art and never realized video content could be rearranged to illustrate a completely different experience. I think it is really cool to let a live performer and pre-recorded video content interact with each other, since one is two-dimensional and the other is three-dimensional, not to mention the intensive interactions and subtle connections between them. The lighting was also a very interesting factor. When Demi's group was performing, I was absolutely drawn in by their video content and the live lighting. Angle was processing audio when the spotlight was on her, and it perfectly strengthened the connection between her and the video content.

Were there any unexpected problems during the performance? Yes, there were! The one that happened to our group involved the lighting. Since we didn't have a mark on the floor, we didn't realize the installation would not be under the spotlight, so when the light came on, I had to move the installation to the center of the light and tried to make it part of my performance so the audience would not notice it was an accident. There was also an accident with the lighting cue: the light came on much earlier than it was supposed to. Fortunately, we immediately made slight modifications to the video, the audio, and my movements, and it looked more natural than I expected, so it turned out just fine, haha.

Here is the video on Vimeo showing our performance at the Power Station of Art in Shanghai on May 6; please feel free to let us know what you think.

(Parts of the content above are credited to my collaborators Shiny Wu and Cindy Yudi Jia.)


Everyday Use of Tech Charts + Accessibility Settings

This post was supposed to be published in March; however, I wasn't sure whether we should post all the content here on the documentation blog, so this is the make-up for the previous post.

Everyday Use of Tech Chart Part 1


Everyday Use of Tech Chart Part 2

Accessibility Settings

To be honest, I have tried accessibility settings many times before, because I wanted to test how smart and convenient the technology could be (and, to excuse my laziness). But this time, I was asked to use accessibility settings to finish a task as an assignment, so I chose one that I had never used before: the accessibility keyboard on macOS. Just as Marianne demonstrated in class, I used the same accessible functions. Maybe because it was the first time I used an accessible onscreen keyboard, I found it a bit hard to use:

  1. A bar moves from the far left to the far right across the whole screen, so if you miss the chance to select the area you want, you have to wait until the next pass, which is time-wasting and inconvenient;
  2. The mode that lets a bar sweep the whole screen is, indeed, very considerate, since no area will be missed; however, maybe because the bar moved rather fast, I felt a little nervous and panicked;
  3. There are many different layers in this function, so you have to be completely aware of its logic and structure;
  4. Since clicking and double-clicking are separate options, you have to go back and choose again whenever you want to switch between the two.

Nevertheless, generally speaking, I still think it is the most helpful accessible onscreen keyboard I have encountered so far, because there is no doubt that it can accomplish any task you want. And although I pointed out many shortcomings, I still cannot think of a better way to achieve this.


Field Trip Response

(This is the make-up response to the field trip; my sincere apologies for the late submission.)

I was not aware of where we were going or what we should observe before we arrived at the CereCare Center. To be honest, when I realized the CereCare Center was in a residential community, I was a little shocked, because I thought such a rehabilitation center would look like a hospital with its own separate building. (After watching the documentary, I learned that the building was the founder Mrs. Lyu's personal property, and that she donated the whole building to provide health care to children with cerebral palsy.)

When I walked into the building, the first thing I noticed was a stairlift. Frankly speaking, it was the first time I had seen one, and I realized I had never thought about patients' problems with climbing stairs. My father suffered a sudden cerebral hemorrhage about one and a half years ago, and I spent months in the hospital's rehabilitation center taking care of him. When my father could get out of bed and start to walk, we first used a wheelchair to help him when he felt tired; one month later, we took the wheelchair away and let him walk without any external help. From my mother's point of view, she didn't want my father to use any assistive device, since she believed that the more he used these devices, the more dependent he would be on them, which would not be good for his recovery. There absolutely were times when my father got exhausted even after a minute of walking or climbing stairs, but we insisted on avoiding any assistive device and just let him sit down and rest instead. So, in short, from my experience, I didn't expect to see climbing stairs as a real problem. But at that moment, I realized that for patients who really cannot walk, or who don't wish to push themselves hard in walking training, the stairlift was completely necessary.

There were many medical apparatus and instruments in the classrooms, which I was quite familiar with, since I had seen patients (including my father) use them at the rehabilitation center and I knew how hard and painful it was for them to do physical training on these instruments. Perhaps because I have seen such scenes many times, to be honest, I didn't pity any of the children; on the contrary, I viewed them just as children in a kindergarten learning things. However, though I tried my best to view the place as a kindergarten, I have to admit there were obvious differences between CereCare and a real kindergarten:

1) financial problems:

a) expenses: unlike most kindergartens, where one teacher is responsible for many children, here at CereCare each kid needs one to two teachers, because most of the kids are not able to take care of themselves or even perform some daily actions. Though CereCare tries its best to save money on nurses' pay (which the nurses agree to), daily expenses, and rent, the spending still far exceeds the income.

b) income: most ordinary kindergartens can admit as many children as they want, and can thereby make sure their income covers daily operations; however, due to space limits and the lack of professional nurses, CereCare is not able to serve many children at the same time. According to our conversation with the manager, CereCare used to have two floors and 32 children, but now has just a one-floor space on the second floor and 20 children; the tuition is ¥6,800 per month for each kid, and they also provide financial aid for some of the children. Thanks to sponsors and fundraising events, they receive some financial help regularly, which somewhat relieves the burden.

2) human resources: the staff at CereCare are divided into two groups, life nurses and therapists. To make sure the children get professional training, the therapists attend professional training in Hong Kong regularly and teach the life nurses when they come back. Given the current situation in China, physical therapists (PT) are few, not to mention speech therapists (ST) and occupational therapists (OT), and therapists would rather go to hospitals or other governmental medical institutions than to this NGO. Therefore, it is harder for CereCare to hire professional therapists than it is for kindergartens to hire professional teachers.

Another point: I remember David (if I recall correctly) asked how the children dress themselves every day and how long it takes. I really appreciated this question, which touches on one of my biggest concerns about my father. Though my father can almost live a normal life now, he still can't do things smoothly. For example, it takes him more time to dress than it takes us, and it is a bit hard for him to use chopsticks or a spoon to pick up food accurately, especially since he always closes his eyes when lowering his head. So I have been thinking about adaptive clothing. On the other hand, just as CereCare stated, it is important for the children to learn how to put on normal clothing by themselves, because that is what they will be exposed to in the outside world in the future. I couldn't agree more. What my mother and I hope to see is my father living a completely normal life someday without any assistive help, or at least without others noticing his use of assistive technology. So my point of view is that when designing adaptive clothing or any other assistive technology, we should take into consideration both practicability and protecting users from being treated differently.

Assignment 1+2: Responses to Readings and Videos

For this week's assignment, we were asked to post responses to a few TED talks and the assigned readings. Some of the notions from the readings and videos are similar, so I would like to put them together and discuss them as one.

In both I'm not your inspiration, thank you very much by Stella Young and Paralympics' Least Favorite Word: Inspiration by Ben Shpigel, the idea is put forward that the word "inspiration" is overused on disabled people. Admittedly, this word radiates a positive attitude, with hope and expectation for the future; however, just as Stella entertainingly put it, many of the inspirational slogans, images, and commercials are just inspiration porn. I couldn't agree more. There are tons of such inspirational posts spreading on our social media networks, waiting to be liked and shared, yet they don't make any sense. (In Chinese, we habitually call bloggers who constantly post such content "inspiration poseurs.") Such speech automatically puts disabled people in an exceptional group: why should disabled people be one's inspiration just because they can live a normal life like you and I do? One can definitely be others' inspiration when he or she overcomes huge obstacles and makes big progress; that is to say, what is really inspirational is one's endurance and ability to solve problems. We should never have low expectations of anyone, especially someone who is not physically "the same" as us; otherwise it becomes an invisible form of discrimination.

An interesting fact mentioned in both sources is that many disabled people don't actually think of themselves as disabled, since there is nothing different or difficult about living their lives, and disabilities don't disqualify them from anything. As a matter of fact, as Stella articulates, they are disabled more by the society they live in than by their bodies and their diagnoses.

I also want to discuss Aimee Mullins' TED talk together with All Technology Is Assistive by Sara Hendren, as well as the speech by Gento Kondo in last Saturday's class. The story Aimee shared, about one of her friends being jealous of her "variable" height, actually made me laugh and think: could we say that disabilities, to some extent, are another form of empowerment? This might sound incredible, but it was observed in Aimee's workshop with 300 children that children, when not influenced by adults to behave themselves, express their curiosity and interest by viewing disabled people in a more advanced way: from disabled to someone with the potential to be super-abled. Therefore, from the perspective of providing them with more possibilities, various designs of assistive technology should be taken into consideration. In Saturday's class, I really appreciated Mr. Kondo's speech, which made me think of a question related to product design. When Professor Petit asked how long it took the old gentleman to get used to the Exii arm and the answer was "about one minute", I was genuinely shocked. Then I began to wonder: in product design, how should the designer evaluate and define the threshold of training and instruction a user needs before wearing or operating an assistive device? And within an accessible range, how do we minimize that threshold?

Another design issue I am interested in is adaptability. In the past few decades, it has been a problem that assistive technology is quite expensive and seems to be only for those who can afford it and are willing to pay. But I can totally understand the producers: assistive devices are not mass-produced, so it is extremely hard to stay profitable when making only thousands of products at economical prices. This is even harder when the devices are worn on the body, because they have to be customized for each and every user. Therefore, I keep wondering whether there is a way for designers to maximize the usability of a product across users' different body sizes, body shapes, diagnoses, etc.

When I read and watched New York has a great subway system if you're not in a wheelchair, I found the experience extremely similar to mine in New York. The problem the author pointed out actually happened to me and my friends before; we complained about it but didn't think of a solution. The subway was the only transportation I took in New York that semester, besides one Uber ride. Though the environment was kind of messy, dirty, and noisy, I had to admit it was super convenient, even compared to the systems I had experienced before in DC, Tokyo, Osaka, Dubai, and most Chinese cities, since it covers the whole city and operates day and night. However, without any doubt, it won "the worst getting-into-the-station experience" award in my mind. Each stair step was really narrow and got slippery on rainy days, which was quite dangerous. My dorm was on Second Street and I barely found any elevators in the surrounding stations, but this was not a problem for me, and I didn't even notice it until the day I moved out and went to a hotel around Times Square. I had two big suitcases, each weighing 80 lb. When I pulled the two suitcases to the station entrance, I found there was no elevator. I had to carry one downstairs, leaving the other out of my sight. Fortunately, a gentleman helped me bring the other one down. Can you imagine two people climbing narrow stairs with two heavy suitcases on a hot summer day? The next time, I chose to order an Uber instead. And just as shown in the video, can you imagine someone using a wheelchair being trapped in a station, or not even able to find a way in?

Compared to the subway in NY, the subways in Osaka and Tokyo are much more disabled-friendly. They provide ramp escalators in almost every station, which is really considerate and thoughtful. I prefer (ramp) escalators to elevators, since they are less likely to have an outage, and even when they break down, they take less time to repair. Another thing I noticed on buses in Osaka: when stopping at a station, the bus automatically tilts a little toward the sidewalk. I first thought it was a problem with that bus, then found all the buses had this function and realized it was designed to help elderly passengers step onto the platform easily. Such tiny modifications make a product and the corresponding experience more acceptable and welcome, which I think is something I need to take into account in the future when designing assistive products.


Capstone Progress Report

Capstone Midterm Review


After last week's user testing, I carefully considered the feedback and advice and integrated some of it into my project as improvements. One user suggested that I focus more on the connection between the dance movements and the music beats; for example, the percussion part requires an instant hit, so the corresponding movement should be quick and accurate. I really appreciated this, so I went to a dance professor at NYU Shanghai to ask for help with the choreography. Thanks to Professor Tao, I was recommended a student performer who is good at Uyghur dance and choreography. After talking with her and confirming her availability, I invited her to be my motion-capture model.

Mo-cap System Testing

Luckily, I got the Perception Neuron (PN) kit in time, so over the past week I devoted myself to testing the sensors. It was my first time using PN, and since the whole kit contains 32 sensors and requires the performer to put on all the body straps before inserting the sensors, the first session took about four hours. We spent most of the time putting on the sensors, debugging, and calibrating.

(Many thanks to Yuping Zang, my performer, for supporting this project and devoting her time to the testing.)

The whole testing process was as follows:

  • wear all the sensors: it was hard for just one person to do this; besides the performer, a second person was needed to help put on the body straps, place the sensors in their sockets, and adjust the sensors' placement;
  • calibrate the sensors by performing the required postures: there were four different postures required for calibration, and the performer had to do the whole set multiple times throughout the process (I felt really sorry for having my performer do this repeatedly and really appreciate her patience);
  • make some postures and movements to be captured by the sensors: the motion-capture result was not really satisfactory at the beginning; some parts of the 3D character, like the fingers and arms, drifted severely and appeared at absurd angles. Then we noticed that the sensors are magnet-sensitive and can produce significant errors in an environment with strong magnetic interference, so we moved to the dance studio and found the result was much better;
  • switch the connection mode from USB cable to wireless (Wi-Fi): I was really surprised that this mocap system can transmit the data over Wi-Fi; we set up a local network for the sensors and found it worked just as steadily;
  • perform the demo Uyghur dance and record all the movements: my partner performed the demo Uyghur dance to the music, and all her movements were recorded by the sensors. Then I imported all the data into Axis Neuron to visualize it. Here is the demo version:

(if the gif is not playing please play the video below)

My next step is to work with the performer and Professor Tao to finalize the dance. But before we get to that point, I have to have the music ready, or at least a demo version that can serve as the structure for the final one.

Data Processing

Having recorded the performer's movements, I then tried to transmit the data to Max/MSP/Jitter, the primary software I will probably use for the rest of the project. Max is good at processing signal data and audiovisual material. Axis Neuron can broadcast its data (BVH frame data, calculation data, etc.) over TCP and UDP, and fortunately, Max can also send and receive data over UDP. However, in the process, I found that Max receives the data as raw binary rather than as strings, which is difficult for me to use and process.
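As an illustration of the binary-versus-string issue, the raw UDP payload can be received and decoded in a few lines of Python. This is a hypothetical sketch: it assumes Axis Neuron's ASCII ("string"-format) BVH output, where a frame is a line of space-separated numbers, and the port number (7001) is a placeholder to be matched to the actual broadcast settings:

```python
import socket

def parse_bvh_packet(payload: bytes):
    """Decode one raw UDP payload into a list of floats. Assumes the
    ASCII/string BVH output format; the exact packet layout depends on
    the Axis Neuron broadcast settings."""
    text = payload.decode("ascii", errors="ignore")
    return [float(v) for v in text.split()]

def listen(port=7001, packets=1):
    """Receive `packets` datagrams from the Axis Neuron broadcast
    (the port here is an assumption; match it to your settings)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    try:
        return [parse_bvh_packet(sock.recvfrom(65535)[0])
                for _ in range(packets)]
    finally:
        sock.close()
```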

Now I am thinking of using Wekinator, software that uses machine learning to build interactions between different programs. (Many thanks to Prudence, who introduced this software to me.)
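Wekinator communicates over OSC: by default it listens for input feature vectors on port 6448 at the address /wek/inputs. As a sketch of how mocap-derived features could be forwarded to it, here is a minimal stdlib-only Python example that encodes the OSC message by hand; the feature values and host are placeholders:

```python
import socket
import struct

def osc_message(address, floats):
    """Encode a minimal OSC 1.0 message: null-terminated address and
    type-tag string (each padded to a multiple of 4 bytes), followed
    by big-endian 32-bit floats."""
    def osc_string(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * ((-len(b)) % 4)
    tags = "," + "f" * len(floats)
    return (osc_string(address) + osc_string(tags)
            + b"".join(struct.pack(">f", f) for f in floats))

def send_to_wekinator(features, host="127.0.0.1", port=6448):
    """Send one feature vector to Wekinator's default input address."""
    msg = osc_message("/wek/inputs", features)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, (host, port))
    sock.close()
```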

For the next few days, I'll spend my time composing the music, looking into the data processing in Max, and integrating these two parts together.


#EverythingIsEternal# Data endures. Throughout human history, we have attempted to immortalize ourselves, from drawing our triumphs on cave walls to building tombs for our loved ones that take millennia to erode away. Through various types of media, the objects we want to protect can be saved and kept for a very long time. Sometimes, even as the objects themselves gradually fade away, the data about them is still preserved for centuries. The data I'm referring to can take many forms: written texts, paintings, drawings, audio recordings, video footage, and even people's memories. In this way, everything can last eternally.

#ThePowerOfDataCanBeInvisible# Apart from physical forms of data recording, many others exist invisibly, for example, people's memories and the data's effects on other things. When we talk about data, we are actually talking about its applications and effects. So when generating and spreading data, it is always worth repeatedly considering the idea behind it and the possible influence it can have in the future, even though the effects sometimes don't manifest right away. And artists should always be responsible for their projects and their possible public influence.

#TheVariousShapesOfWordsHaveUnexpectedResults# Texts and words, as one category in the art field, are really interesting forms of art. They use common letters and rearrange their structure to produce unexpected results. Even the same letters can sometimes have totally different meanings and thus different influences, not to mention rearrangements. The use of words and texts deserves more attention and consideration, especially when they are used to transmit specific feelings, emotions, and attitudes.

Work Blog 1 (Week of Mar. 5) – User Testing I

(This is the first blog that I’ve posted here, though it should not be the first one, so I’ll make up the previous work blogs later.)


The first round of user testing: test your project with at least 3 people from *outside* of your section this week, and ideally people who fit into your target audience. What went well? What didn’t? What surprised you? What are your immediate next steps for your project?

Ideas Update: 

Since the sensors for my project had not arrived yet, what I actually did this week was a round of user testing using low-tech, “fake” operations.

As my project developed, some ideas were added and it now differs slightly from the original concept:

  • sensor:

– Update: the sensor I previously planned to use was lidar, which uses laser light to locate a target object or measure its distance. However, after I talked to Professor Wiriadjaja, he advised against it, since a lidar system can only scan a two-dimensional plane and its feedback is not as instant as I expect. He suggested I try Perception Neuron, a motion-capture system, instead. After some research, I found it would be super useful and would fit my project environment well. Most fortunately, one of my friends in France sent me a video of an experimental project there that used Perception Neuron and Max/MSP to do something very similar to what I am planning.

– Current status: the sensors are still on their way to me 囧.

  • interactions:

– Update: I’ve been struggling with the form of interaction between the performer and the project for a long time. My idea was always to integrate Uyghur dance into my music-based project, but due to the limitations of the sensor, I hadn’t figured out the best way to achieve that. Fortunately, now that the sensor is decided, the specific interactions/movements can finally be set down: the performer will wear the motion-capture sensors and perform the Uyghur dance freely, and all the movements, even tiny gestures, will be sensed and mapped to the real-time music.

– Current status: the specific movements will be discussed and finalized later, once I get suggestions and supervision from a professional dancer.

  • performer:

– Update: I was a little afraid of how the performance would turn out, since I might not be able to express what I wish to convey, especially because I have no dance training or skills. But now I’m in contact with the dance professor at NYU Shanghai, hoping to have a trained dancer perform the Uyghur dance and integrate his/her body movements and gestures into my project.

– Current status: in discussion with the dance professor at NYU Shanghai.
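While the movements themselves wait on the dancer, the core mapping idea can already be sketched: each captured body parameter gets scaled into a musical parameter range. Below is a minimal Python sketch of that scaling; the specific choices (arm-elevation angle driving tempo, hand height driving a filter cutoff, and all the ranges) are my own illustrative assumptions, not the final mapping, which will live in Max:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a sensor reading into a musical parameter range."""
    value = max(in_min, min(in_max, value))  # clamp out-of-range readings
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

# Hypothetical mapping: arm-elevation angle (0-180 degrees) from the
# motion-capture data drives playback tempo (60-140 BPM), and hand
# height (0.0-2.0 m) drives a filter cutoff (200-4000 Hz).
tempo = scale(90.0, 0, 180, 60, 140)       # -> 100.0 BPM
cutoff = scale(1.0, 0.0, 2.0, 200, 4000)   # -> 2100.0 Hz
```

The clamp matters in practice: motion-capture data jitters, and a reading slightly outside the expected range shouldn’t throw the music parameter out of bounds.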


– I invited one of my friends who is currently taking an Uyghur dance class to perform in front of three audience members majoring in business, math, and biology (there is no specific reason for these three majors; the invitations were sent out completely randomly).

– I also prepared a demo music piece from the Uyghur Twelve Muqam to play along with the performer’s dance.

– Layers of generative visuals were also prepared.


First of all, I found the visuals extremely distracting, pulling the audience’s attention away from the performer. I think this was partly because the visual content was not related closely enough to the Uyghur music. In fact, the visuals were not ready yet; I just wanted to run an experiment to see which part people focus on more when both are presented: the physical performance or the digital visuals.

Apart from that, I found the audience could definitely tell that what the performer presented was an Uyghur dance, because the classic movements and gestures were really familiar to them. Even without the music, they could still tell what it was. However, when I asked whether it would be better to play the music during the dance performance or not, they all said they wanted the music. “Obviously I can tell it is Uyghur dance even without music, since we have all watched it before on TV shows and even in real life, and it is really famous. But how can a dance be there without music? It is impossible and incomplete,” said one audience member. “And I found the music plays a really important part in the dance, because the performer danced to the beats and rhythm. It obviously augmented our visual and listening experience.” Another audience member said: “I love this performance, and I couldn’t stop dancing along with the performer, since the whole performance is really dynamic, especially with the music accompanying it. You know, Uyghur people are always energetic and make you really want to dance with them even though you might not know how. After this experience, I think I will probably get to know more about Uyghur culture.”

Having this feedback, I was really excited to see this project’s feasibility: it is able to provoke people’s interest in Uyghur music and dance, and this is a good sign that they may get to know Uyghur culture better and care about its preservation in the future.

For the next steps, I’ll refine the visual part and try to find more possibilities in it. However, as I have always been reminded, the music is undoubtedly the main part of my project, so the primary task is to harmonize the relationship between the performer’s actions and the music. I’ll keep discussing with the dance professor to find the most suitable dance movements for this project. In the meantime, if possible, I’ll keep exploring the visual part.


Assignment 5: Generative Image Processing

We learned generative image processing techniques in last week’s classes. I then wrote a Processing sketch that pixelates recreations of paintings (Van Gogh’s The Starry Night and Munch’s The Scream), inspired by pointillism.

Here is the short demo of the project:

There are two modes in this project for presenting the image. In the first mode, as shown in the first gif, the recreated paintings (The Starry Night and The Scream) are rendered as many separate particles, and each particle can move within a subtle range. In the second mode, as shown in the second gif, the image is still made of particles, but every time the particles move they leave traces, so the window gradually fills with colored pixels until no black space is left. The second mode is actually based on mode 1; it simply never refreshes the background.

To pause the particles’ movement, the user just needs to press the spacebar.

Besides, as the mouse cursor moves from top left to bottom right, in both mode 1 and mode 2 the image will “explode” into particles that fly toward the viewer. When the spacebar is pressed, it is easier to see the particles in 3D perspective.
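The sampling behind mode 1 can be sketched roughly as follows, with plain Python standing in for the Processing sketch; the grid step, jitter range, and the tiny stand-in image are my own assumptions, not the sketch’s actual values:

```python
import random

def pixelate(pixels, step=4, jitter=1.0):
    """Sample a pixel grid every `step` pixels into jittered particles:
    each particle keeps its sampled color and drifts within a subtle
    range around its home position (mode 1 of the sketch)."""
    particles = []
    h, w = len(pixels), len(pixels[0])
    for y in range(0, h, step):
        for x in range(0, w, step):
            particles.append({
                "home": (x, y),
                "pos": (x + random.uniform(-jitter, jitter),
                        y + random.uniform(-jitter, jitter)),
                "color": pixels[y][x],
            })
    return particles

# A tiny stand-in image (8x8 grid of grayscale values) instead of the
# Starry Night bitmap used in the actual sketch.
img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
print(len(pixelate(img, step=4)))  # 2x2 grid of particles -> prints 4
```

Mode 2 then corresponds to never clearing the canvas between frames (in Processing terms, skipping the background() call in draw()), so every drawn particle leaves a persistent trace.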