Kinetic Interfaces: A Medical Application of ZeroUI (Ellen Yang)

As I understand it, “ZeroUI” is a concept that allows the computer to interact with the human body in a multi-dimensional way. That is to say, any information conveyed by our body (including sound, movement, vision, and so on) can be received, processed, and responded to by the computer. To realize that, I think an information receiver, a recognition system, and a response mechanism are, at minimum, necessities for a ZeroUI computer.

This concept is helpful when the interaction needs to involve more than hands and a touchscreen. In fact, the first thing that comes to my mind is the device that helps Stephen Hawking communicate with the outside world. According to the SwiftKey team (one of the producers of that device), Professor Hawking was “using a small sensor which is activated by a muscle in his cheek”. Detailed information can be found here:

Stephen Hawking’s new communication system revealed

Clearly, this device includes a sensor, as the SwiftKey team has described, as well as a strong database that records Professor Hawking’s word-usage habits so that it can predict what he is going to say next.

The SwiftKey team specialized the device for Professor Hawking, and it works so successfully that it proves the huge medical value of similar innovations. Many people suffer from being unable to communicate with others conveniently due to physical disability, but with a “ZeroUI” device they could express themselves more efficiently.

Although this device is a great innovation, I still believe there remains much to improve. After all, Professor Hawking still uses the sensor to “type” out what he wants to say, but is it possible to enable different forms of expression? I believe this would be a good direction for expanding this helpful innovation into the art field.

4 new predictions (Ellen)

Extension to “Microsoft Layout”

This prediction is an extension of “Microsoft Layout”. According to its introduction, Microsoft Layout is a VR-based floor-planning tool that lets people arrange items virtually before setting them up in reality, in order to find out what should be fixed. My prediction adds some specific details. First of all, the tool should let users define the whole plane before placing the items: for example, exactly how large should the plane be? From this perspective, I suggest a square-based plane. The whole plane is divided into squares of the same size; for example, one plane could be a 100×100 grid. What’s more, the squares are not fixed. They are editable, so if the builders want to adjust the fineness of the plane, they can change it from a 100×100 grid of 10 m × 10 m squares to a 1000×1000 grid of 1 m × 1 m squares. Secondly, items should be placed according to the squares of the plane; for example, a printer might occupy a 2×2 area. To realize this, the tool should collect as much information about different items as possible, so that builders can use correct data to build the plane. Last but not least, Microsoft Layout could also be applied to the architecture industry.
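To make the square-grid idea more concrete, here is a minimal JavaScript sketch of an editable grid; the class name, numbers, and methods are purely illustrative and have nothing to do with Microsoft Layout’s actual software.

    // Hypothetical sketch of the editable square grid described above
    // (names and numbers are illustrative, not Microsoft Layout's API).
    class PlanGrid {
      constructor(cols, rows, cellMeters) {
        this.cols = cols;               // squares across
        this.rows = rows;               // squares down
        this.cellMeters = cellMeters;   // side length of one square, in meters
        this.cells = new Array(cols * rows).fill(null);
      }

      // re-divide the same floor area at a finer resolution,
      // e.g. from 100x100 squares of 10 m to 1000x1000 squares of 1 m
      refine(factor) {
        return new PlanGrid(this.cols * factor, this.rows * factor, this.cellMeters / factor);
      }

      // place an item that occupies w x h squares with its corner at (x, y)
      place(item, x, y, w, h) {
        for (let dy = 0; dy < h; dy++) {
          for (let dx = 0; dx < w; dx++) {
            this.cells[(y + dy) * this.cols + (x + dx)] = item;
          }
        }
      }
    }

    const plan = new PlanGrid(100, 100, 10);  // a 1000 m x 1000 m floor
    plan.place('printer', 4, 7, 2, 2);        // a printer occupying a 2x2 area
    const finerPlan = plan.refine(10);        // now 1000x1000 squares of 1 m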

Facebook VR live experience

This prediction is based on Oculus Research at Facebook Reality Labs. My prediction involves (1) a camera for live shooting and (2) a VR headset for the viewing experience. When a Facebook user starts a live video, he or she wears a VR headset that contains a camera shooting 360-degree footage of the surroundings. Through Facebook, the audience can see whatever the camera is shooting; they can even see the things behind the user.

The VR game market grows rapidly

Actually, this is already happening; my prediction goes further. I think that several hit games like the Oasis in Ready Player One or Sword Art Online will become really popular throughout mainstream society. By then, VR headsets should be as common as smartphones, because such games depend on a large number of users or gamers. As more and more people get used to VR headsets and VR games, the VR game market will become a popular new industry that attracts many investors and manufacturers. Game platforms such as Steam will earn more money, and more games will be produced.

AR museum

A museum that consists of AR exhibits will appear within 4 years! This museum will contain nothing except AR equipment. As VR games get more and more popular, the AR museum will become a place that allows people to experience the VR world in physical reality. Besides presenting VR game worlds, this museum will also be used to present the latest VR/AR technologies that are not yet complete enough to be shown as physical objects. An AR robot will be the guide in this museum.

Final Project (Lu & Syed) Ellen & Jaime

The working link:

http://imanas.shanghai.nyu.edu/~xy812/final%20project/index.html

Description:

The website is designed for stress relief. Using HTML, CSS, and JavaScript, our website provides relaxing audio along with some interesting interactions. The user can choose among three paradises (underwater, rainforest, and beach) and enjoy the relevant relaxing audio. Each paradise provides at least two different audio tracks, and the volume control sliders are in the top right corner. Using the sliders, users can set the volume of each track according to their personal preference. Also, each page has a “watch your worries disappear!” comment box, which allows the user to type in their worries and watch them disappear in the center of the page. For the underwater page, we added a bubble interaction: a bunch of bubbles float up from the bottom of the page, and when the user’s mouse moves over them, the bubbles escape from the mouse. For the rainforest page, we created a rain animation that represents the rain in the rainforest. For the beach page, we have a seagull PNG image that moves according to the user’s mouse movement, representing the seagulls flying around on the beach. The citations page can be opened from the “i” image in the bottom right corner of the main page.
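As a minimal sketch of how a slider can drive an audio element’s volume (the element ids and file name below are assumptions for illustration, not our actual markup):

    // Assumes <audio id="waves" src="waves.mp3" loop autoplay></audio> and
    // <input id="waves-volume" type="range" min="0" max="1" step="0.01" value="0.5">.
    const waves = document.getElementById('waves');
    const slider = document.getElementById('waves-volume');

    slider.addEventListener('input', function () {
      waves.volume = parseFloat(slider.value);  // range inputs report strings
    });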

Process:

Jaime and I worked together to get our website done. I worked on the underwater and beach paradises, while Jaime worked on the rainforest and the home page (the main page).

Challenges and Learnings:

We had many problems while coding. These are the major ones:

  1. The bubble interaction. At first, we wanted to create a wave animation on the beach page, and Jiwon taught us to use a p5 function to create a cos() effect. However, the final effect looked so awful that we had to give up on the idea. But the cos() code was useful for creating the bubble effect, so we used it to make the bubbles float up from the bottom of the page. For the interaction, Jiwon helped us write the function that updates the positions of the bubbles (see the first sketch after this list).
  2. The seagull flying problem. At first, I found sample code online that lets an image change position according to the position of the mouse. But that code was too complicated, and there is actually an easier way to create the effect: the lerp() function (which Jiwon told us about). Once the starting and ending positions are set, the image moves toward the target at a preset rate (see the second sketch after this list).
  3. The comment box interaction. We used CSS and JavaScript together to create it. Jack taught us to use the onclick() function to fade in and fade out the text that users type in (see the third sketch after this list).
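Below is a rough p5.js sketch of the bubble idea, not our exact code: the bubbles rise, wobble horizontally with cos(), and slide away when the mouse gets close. All the numbers are illustrative.

    let bubbles = [];

    function setup() {
      createCanvas(400, 400);
      for (let i = 0; i < 30; i++) {
        bubbles.push({ x: random(width), y: random(height), r: random(8, 20), phase: random(TWO_PI) });
      }
    }

    function draw() {
      background(20, 60, 120);
      noFill();
      stroke(255);
      for (let b of bubbles) {
        b.y -= 1;                                        // rise
        b.x += cos(frameCount * 0.05 + b.phase) * 0.8;   // cos() wobble
        if (dist(mouseX, mouseY, b.x, b.y) < 60) {       // escape the mouse
          b.x += (b.x - mouseX) * 0.1;
          b.y += (b.y - mouseY) * 0.1;
        }
        if (b.y < -b.r) {                                // recycle at the bottom
          b.y = height + b.r;
          b.x = random(width);
        }
        ellipse(b.x, b.y, b.r * 2);
      }
    }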
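The lerp() approach for the seagull can be sketched like this (the image path and easing amount are assumptions):

    let gull;
    let x = 0, y = 0;

    function preload() {
      gull = loadImage('seagull.png');   // placeholder asset name
    }

    function setup() {
      createCanvas(600, 400);
    }

    function draw() {
      background(200, 230, 255);
      x = lerp(x, mouseX, 0.05);   // move a fraction of the remaining distance each frame
      y = lerp(y, mouseY, 0.05);
      image(gull, x, y, 60, 40);
    }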
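And a minimal sketch of the comment box fade, assuming an input, a button, and a paragraph with a CSS opacity transition (the ids below are made up for illustration):

    // Assumes <input id="worry">, <button id="release">, and
    // <p id="floating-worry"> styled with "transition: opacity 2s; opacity: 0;".
    document.getElementById('release').onclick = function () {
      const box = document.getElementById('worry');
      const out = document.getElementById('floating-worry');
      out.textContent = box.value;
      out.style.opacity = 1;                                     // fade the worry in
      setTimeout(function () { out.style.opacity = 0; }, 2500);  // then let it fade away
      box.value = '';
    };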

What might be changed:

  1. The names of the audio tracks should be added.
  2. We want to make the background move with the mouse. Actually, the code was already written, but the image was set in CSS, so I couldn’t use the JS function to change it. What a pity! (One possible approach is sketched below.)
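A possible fix, sketched under the assumption that the background image is set with background-image in CSS, is to nudge its position from JavaScript instead of swapping the image itself:

    document.addEventListener('mousemove', function (e) {
      const dx = (e.clientX / window.innerWidth - 0.5) * 20;   // up to ~10 px shift
      const dy = (e.clientY / window.innerHeight - 0.5) * 20;
      document.body.style.backgroundPosition = dx + 'px ' + dy + 'px';
    });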

Week 8: Reading Response to “Mechanical Reproduction” (Lu & Syed) Ellen

Mechanical reproduction has undergone huge progress since the very beginning of humanity. Its development allows knowledge to spread widely among the masses, thus changing the way humans learn and process information. In the area of art, mechanical reproduction accelerates the circulation of famous artworks. However, these reproductions are not perfect: even though the appearance of the object is copied, its place and significance cannot be copied as the copies circulate. Nevertheless, these imperfect copies may create more value in different circumstances.

I agree with these arguments. Whatever the artwork is, mechanical reproductions of it certainly help develop the arts, even though they might be incomplete. From my perspective, both the original and the copies have their value in different areas.

Week 10: Reading Response (Syed & Lu) Ellen

Reading Response to “Hackers and Painters”

The author states the similarities between hackers and painters and then claims that, like painters, hackers are artists rather than scientists. I partly agree with him, in that hackers start out more creative while scientists start out more rigorous (“so hackers start original and get good, and scientists start good, and get original”). However, I don’t agree that hackers are not scientists. I think hackers need more than creativity: they need rigorous logic and need to learn the scientific method before they start to create. Unlike painters, it is impossible for hackers to create something before they learn a specific coding language.

Reading Response to “Computers, Pencils, and Brushes”

This article argues that computers are more of a tool than a player, and I agree. In the machine age, everything favors mechanized production, including the field of design, and that is why computer skills are so important in modern times. However, just like pencils and brushes, computers are no more than a tool for creating things; human beings are the true creators who use the tool to make amazing things. Computers cannot think for themselves, and their operating logic is created by humans.

Week 11: Reading Response to “A History of Internet Art” (Lu & Syed) Ellen

This article introduces the history of net art in the last century, mainly from 1995 to 1997. Many net artists appeared at that time, programming great projects of various types on the web, mostly in the form of websites. From the characteristics of these projects, we can say that the nature of net art is sharing, communication, and community, relating tightly to the self-expression of communities.

Through net art, people can communicate with each other without limitation. The freedom of expression on these websites allows artists to speak and communicate with each other. Net art projects are really helpful when it comes to promoting feminism or the rights of other minority groups, because the community nature of net art gathers minority groups together and provides a place for them to discuss and express themselves.

However, the openness of net art still creates some problems. The article mentions that some net artists are afraid that net art will be over-commercialized and colonized by mainstream media, so that its character as a minority community will fade out. I have experienced a similar situation when one of my favorite websites became really popular, and the people who joined later were not so concerned about the art itself. While witnessing something I love becoming popular is a happy experience, the result might not be that good for those who were there from the beginning.

Video Project (Lu & Syed) Ellen Yang

Description:

This video project aims to present the problems of discrimination and stereotyping. Our group (Ivan, Jaime Z., and I) wanted to make a parallel comparison between a white girl and a girl of color to show how seemingly identical daily lives differ for them. By showing this, our advice is to pay attention to your thoughts about minority groups: your thoughts and behavior might hurt them because of stereotypes you don’t even notice. Alongside the two comparison videos, we placed many interviews. By letting people talk about their own experiences, we want viewers to be more engaged in the situation and understand that discrimination is not far away from us; it is happening very near.

Website Link:

http://imanas.shanghai.nyu.edu/~cic278/CommLab/Mitodo/video.html

Discussion of the Process:

At first, when we designed the plot of the comparison video, we had four scenes in total, but after we shot the footage we found that it was actually very difficult to shoot from a first-person perspective, so we cut it down to two scenes. Even though we cut the number of scenes, the quality of the comparison video is still not very good. For editing, I used Adobe Premiere and Audacity to adjust the length and audio of the final videos; Ivan did the coding for the website and used JavaScript functions to control the audio of each video; and Jaime shot the plot videos and also played a role in them.
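A rough sketch of the kind of JavaScript used to control the audio of each clip might look like this (the behavior and selectors are assumptions, not Ivan’s actual code):

    // Mute every clip except the one that just started playing.
    const clips = document.querySelectorAll('video');

    clips.forEach(function (clip) {
      clip.addEventListener('play', function () {
        clips.forEach(function (other) {
          other.muted = (other !== clip);
        });
      });
    });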

Challenges and Learnings:

The biggest challenge was shooting video while walking; for me, it was editing the videos. Some of the scenes had to be shot many times.

Week 8: Video Project Proposal (Lu & Syed) Ellen Yang & Carlos Ivan Cornejo & Jamie Zaharoni

The video we’ll be creating is educational. We hope to show people the dangers of the Internet by telling the story of the main character, A (we haven’t decided on the name yet).

The main plot:

A (the character) accidentally stumbles onto a killer-for-hire website. A is half scared and half curious. Out of curiosity, A pays the website a small amount of money to kill a cat, not really thinking it is going to happen. However, a few days later, A receives an email containing photos taken as proof that a cat was killed. A is totally scared and never visits that website again.

Warning at the end: DO NOT DO ANYTHING ILLEGAL ON THE WEB! It may lead you into real danger.

In the video, one of us will play the role of A. However, we are not going to show the main character’s face; all that will appear in our video is a computer and the content on its screen. Also, we will not kill a cat for the recording (it is just a plot device). We will cut out some paper and add a mosaic blur to pretend there is a cat’s body.

Audio Project (Lu & Syed) Ellen

Description: 

This audio project was done by Raza Haider Naqvi, Zain Majid, and me. The audio is inspired by a YouTube video, https://www.youtube.com/watch?v=LKBNEEY-c3s&feature=youtu.be, that tells the story of a 3-year-old Syrian refugee who tried to cross the Mediterranean Sea but drowned during the journey. In our audio, we describe two situations of the Syrian people, before and after the war; the happy life they had stands in strong contrast with the tragedy that happened later: the gunshots, the bombs, the fleeing… In that way, we want the listener to sympathize with the Syrian refugees and help them by donating to the three websites listed at the bottom of our page.

The website part is simple. It includes a photoshopped picture, several blocks of text, and three logos that link to the donation websites.

Website Link:

http://imanas.shanghai.nyu.edu/~rhn225/week7/audio/audiopro.html

Discussion of the Process:

We used Audacity to make our audio. The process was quite smooth, but the whole piece was too long: it was originally 5:46, and we later shortened it to 4:46. The audio part took us two days to complete.

The website was the truly hard part. First, we wanted the text to appear with a typing animation, but whole paragraphs cannot be typed out with a CSS animation; we tried JS but failed (a possible JS approach is sketched below). Second, we would have liked some words to appear in sync with the audio, which also needs JS animation, but we finally gave up because we didn’t know how, and we thought the visual part might distract the viewer from listening to our audio.
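For reference, a small JavaScript typing effect for a whole paragraph could be sketched like this (the element id and text are placeholders):

    function typeText(id, text, delayMs) {
      const el = document.getElementById(id);
      let i = 0;
      const timer = setInterval(function () {
        el.textContent += text.charAt(i);        // reveal one character at a time
        i++;
        if (i >= text.length) clearInterval(timer);
      }, delayMs);
    }

    typeText('story', 'Before the war, the streets were full of music...', 60);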

Challenges and Learnings:

The main challenge we met was the typing animation. First, I learned how to use audio tags to switch the audio source (we wanted a typing sound to accompany the typing animation), although we didn’t use it in the end; I also reviewed how to create a typing animation with CSS. In the process, I learned a small trick, “auto scroll down”, using a JS function (sketched below). I also learned how to use Audacity.
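The “auto scroll down” trick can be sketched as a small JS function (the step size and interval are assumptions):

    const scroller = setInterval(function () {
      window.scrollBy(0, 1);                                           // scroll one pixel per tick
      if (window.innerHeight + window.scrollY >= document.body.scrollHeight) {
        clearInterval(scroller);                                       // stop at the bottom of the page
      }
    }, 30);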

What would I change if I did it again?

I still want to add the typing animation using a JS function. That would be very cool.