Inflatables Class 3 HW – Jimmy Kim

Read these two websites and write about your feelings about these two products and how they use air inflation in their design.

The two products are an inflatable rolling suitcase designed by Nicola Staubli and an inflatable LED lamp constructed by Theo Moller and Ingo Maurer. The rolling suitcase, called Zippelin, uses an inner tube that is sewn to a big piece of tarp. The pressurized air then replaces the big, bulky structure of metal parts, so the bag can be deflated to take up only a fraction of the space of a normal traveling bag. Staubli derived the idea from foldable bikes, which save a lot of space. I think this idea is cool and interesting but not practical. When I’m traveling, I am looking for something sturdy and reliable to hold my belongings, and I don’t think I would trust an inflatable bag. Also, there is a funny user comment at the end of the article, in which someone replies to a remark that the bag costs 620 USD (“would you say, then, the price feels somewhat inflated?”). The price is definitely not worth it.

The second product is an LED lamp that one unrolls, blows up, and connects to its wiring. A sensor switch sits at one end of the LED strip, which radiates toward the reflective side of the tube and scatters indirect light around the room. The lamp can then be hung from the ceiling or leaned against the wall. I think this product is much more practical and appealing.


Week 14: Where do you want to wake up tomorrow? (Sarah)

Title: Where Do You Want to Wake Up Tomorrow?

Partner: Julia

Description and Idealization: For this project, Julia and I decided to ask people in what world, or where, they want to wake up tomorrow. We recorded people answering this question and then drew animations on the video using Premiere and Photoshop. We also asked them to speak creatively on the topic, so some of the answers they gave us are quite long. The website is quite simple, but since we wanted to be consistent with the style, we decided to also use drawings for the home page and the buttons. After the home page, the website redirects the user to a page with a long drawing of a building, with drawings of the people we interviewed in each window. The reason we used drawings in our project is that we wanted to inspire people to actually try to live in the world they want to live in and encourage them to do the things they want to do instead of just conforming to their situations. This is also the reason we decided to show the people inside a building looking out the window: we think this represents how people are trapped in their lives and are, in a metaphorical sense, looking out the window and seeing the world.

Videos: For the videos, we wanted to have the same style for all of them and keep it simple, so we recorded headshots of the people answering the question. We asked the question on camera because we wanted to capture their thinking process. With these, we then made an introduction video for our project. The videos were edited in Adobe Premiere, where we put the video and audio together (since we recorded the audio separately). We used a drawing pad to sketch drawings in Photoshop for the animations. To do the animations, we had to save the drawings as a series of separate images. Then, in Premiere, we used the animation tools to put them together and add them to the videos.

Code: For the website of our project we created two HTML pages, two CSS style sheets, and one JavaScript file.

  • HTML: First we added the jQuery library and linked each HTML page to its corresponding CSS sheet and JavaScript file. We created a div for the home-page video, where we loaded the introduction video and created a function to check the video time. We then uploaded a GIF for the title of our project so that it would be the first thing the user sees when loading the page, but for some reason, the on-click function did not work on the GIF, so we created a div for a fake layer, which allowed the user to click on the GIF in order to continue to the video. Then we created another GIF, an image of a door, with a function that redirects the user to the main site on click. On this page, we also used jQuery methods to change the opacity of the door when the mouse hovers over it. A sketch of all of this is shown below.

For the second HTML page, we also added the jQuery library and linked the CSS sheet and JavaScript files. We created divs for the image of the building, for each of the people we interviewed, for the audio playing on the website, for an image of an exit door, and for the viewing mode of the videos, where we put a little cross in the top right corner to exit the videos. Then we created some functions with jQuery for the opacity of the images when hovering over them.
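A minimal sketch of the home-page logic described above (the ids and the file name here are placeholders, not necessarily the original code):

```javascript
// Minimal sketch of the home-page behaviour; ids and file name are assumptions.
$(document).ready(function () {
  // The click handler would not fire on the GIF itself, so a transparent
  // "fake layer" div sits on top of it and receives the click instead.
  $("#fakeLayer").click(function () {
    $("#titleGif, #fakeLayer").hide();
    $("#introVideo").fadeIn(1000);
    $("#introVideo")[0].play();
  });

  // Door GIF: change opacity on hover, redirect on click.
  $("#door").hover(
    function () { $(this).css("opacity", 0.6); },
    function () { $(this).css("opacity", 1); }
  );
  $("#door").click(function () {
    window.location.href = "main.html"; // hypothetical file name
  });
});
```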


The reason we decided to use jQuery is that it made our code much simpler and easier to program, though it was a bit confusing at first, especially because of the unfamiliar syntax. We also used jQuery here to gradually change the background color of the page, which we learned from a tutorial on this website.
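jQuery on its own cannot tween colors, so tutorials typically pair a CSS transition with a timer, or use the jQuery Color plugin. A sketch of the CSS-transition approach (colors and timing are placeholders):

```javascript
// Sketch: cycle the background colour every few seconds.
// Relies on a CSS rule such as:  body { transition: background-color 3s; }
var colors = ["#ffd5d5", "#d5e8ff", "#e2ffd5"];
var i = 0;

setInterval(function () {
  i = (i + 1) % colors.length;
  $("body").css("background-color", colors[i]);
}, 3000);
```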

  • CSS: For the CSS of the home page, we adjusted the width and height of each div and gave them their appropriate characteristics, such as cursor style, background color, visibility modes, and position. We also used keyframes to give fade-in and fade-out animations to the GIF.

    For the other CSS style sheet, we defined several classes for the different background colors and, as in the other style sheet, gave the corresponding characteristics to all of our IDs. We also created another keyframe animation to zoom the videos in whenever the user clicked on an image.


  • JavaScript: We had the following functions:
  • To redirect the home page to the main page and vice versa.
  • To hide the home-page GIF and the fake layer, and make the video visible. We also used jQuery to give the video a fade-in animation.
  • To check whether the current time of the video has passed the full length of the video and, if so, to make the enter door visible and hide the video.
  • To play the videos. Since we did not want to create a separate function for each video, we used jQuery: we appended each video using its id and made it visible, added a CSS animation for the opacity, created variables to position the video in the exact center, and finally played it.
  • The last function exits the video: the background music starts playing again, the video is paused and hidden, and the video id is removed so that we can append a different video when clicking on a different person. (A sketch of these last two functions appears below.)
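A minimal sketch of those last two functions (ids, paths, and the centering method are placeholders, not necessarily the original code):

```javascript
// Play a person's video: one function serves every video by appending it by id.
function playVideo(personId) {
  $("#backgroundMusic")[0].pause();
  $("#viewer").append(
    '<video id="currentVideo" src="videos/' + personId + '.mp4"></video>'
  );
  // Position the video in the exact center of the window.
  $("#currentVideo").css({
    position: "fixed",
    left: "50%",
    top: "50%",
    transform: "translate(-50%, -50%)",
    opacity: 0
  });
  $("#currentVideo").animate({ opacity: 1 }, 500);
  $("#currentVideo")[0].play();
}

// Exit the video: pause it, remove its id so a different video can be
// appended later, and resume the background music.
function exitVideo() {
  $("#currentVideo")[0].pause();
  $("#currentVideo").remove();
  $("#backgroundMusic")[0].play();
}
```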

Overall, Julia and I are very happy with our project, but we wish we could have added more interactivity between the user and our project. We also wanted to add a page where the user could answer where he or she wants to wake up tomorrow, but we simply could not figure out how to do it.


The Story of NYU Shanghai as told through Mushrooms, brought to you by Guille and Marj

The Story of NYU Shanghai as told through Mushrooms
Collaborators: Marjorie Wang and Guillermo Carrasquerro
Tech: Google Tango, Lenovo Phab, Unity3D, MagicaVoxel, Blender
Design Doc PPT
Update PPT

The premise: NYU Shanghai collapsed many years ago, in the year 2019. You have come to visit the ruins of the university, which is now overgrown with plants and mushrooms. To hear the story of NYU Shanghai, you must plant a mushroom and wait for it to grow. When the mushroom is fully grown, you can experience the story of NYU Shanghai.

[Image: mushone]
The process:
Guillermo and I are both quite terrible at scripting, so we wanted a premise that was more heavily focused on the visuals. We began by creating mushroom assets with the incredible free software MagicaVoxel. With this simple tool, we were able to easily create the environment, as well as the post-mushroom visuals, in a consistent design scheme.

We wrote a script (dialogue) read by beloved professor Clay Shirky. The script:
Clay: Welcome to the Interactive Media Arts floor. IMA was formerly the creative hub of NYU Shanghai.

(Instantiate Mushroom House)
[Image: mushroomhouse]
My name is Clay and I will be your trip guide today.

IMA was founded as a sister program of the ITP graduate program in NYC. Its primary focus was integrating technology with the arts. IMA was born in 2013 alongside NYU Shanghai’s first-ever class. At the beginning, IMA had only 4 students.

(transform 4 to 300)
A department of 4 students quickly grew, with over 300 students taking IMA classes in 2017. In these years, the link between art and technology was shown to be a vital aspect of professional development.

(visual)
For example, this project, developed by one of the very greatest of all IMA students ever, Marjorie Wang, bridged Virtual Reality with Artificial Intelligence. This project was adopted by Google to enhance the interview process.

(mini oasis)
In 2017, the lab for Augmented and Virtual Reality was created. This lab has become the place to explore and develop technological advancements in AR and VR, without boundaries or guidelines, as a space for self-expression and innovation.

At the peak of IMA’s success, the unimaginable happened: the great technological apocalypse descended from heaven and destroyed all electronics. Due to the diseased soul of society and its reliance on technology, in 2019, NYU Shanghai collapsed and vanished.

How the script works: Christian, you already know this, but I’m very proud that I scripted everything, so I will explain my beautiful C# script. Big thanks to Sean G. Kelly for being there for me, for emotional support and scripting support. Keep in mind that I’m terrible at scripting, so there may be unnecessary lines.
With my script, I wanted the player to be able to plant mushrooms. After a certain amount of time, the mushrooms would “grow large enough” and a mushroom house would instantiate alongside a Clay 3D model and an AudioClip.
To do this, I created public Transforms for every gameObject and AudioClip I wanted to instantiate at certain times. Then, I set a boolean for each instantiation, which let me instantiate each object only once, so there wouldn’t be infinite clones of an object trying to instantiate every frame after its trigger time. Then, I created mytime, a timer that begins after the player plants the first mushroom and increments by Time.deltaTime every frame. For each event, once mytime was greater than or equal to that event’s trigger time and its boolean showed it had not yet fired, I instantiated the Transform or AudioClip and flipped the boolean so the object would be instantiated only once.
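A minimal sketch of that timed-instantiate pattern (names and trigger times here are made up for illustration, not the original script):

```csharp
// Sketch of the timed-instantiate pattern described above.
using UnityEngine;

public class MushroomStory : MonoBehaviour
{
    public Transform mushroomHouse;   // object to spawn when the mushroom is grown
    public AudioClip clayIntro;       // Clay's narration for this moment

    private bool started = false;     // set when the first mushroom is planted
    private bool houseSpawned = false;
    private float mytime = 0f;

    public void PlantFirstMushroom()
    {
        started = true;               // the timer begins here
    }

    void Update()
    {
        if (!started) return;
        mytime += Time.deltaTime;

        // The boolean guard makes sure each object is instantiated exactly
        // once, instead of cloning every frame after its trigger time.
        if (mytime >= 10f && !houseSpawned)
        {
            houseSpawned = true;
            Instantiate(mushroomHouse, transform.position, Quaternion.identity);
            AudioSource.PlayClipAtPoint(clayIntro, transform.position);
        }
    }
}
```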
For some mysterious reason, I was unable to destroy the objects, so I just created animations for the objects and set the last two keyframes as non-active.

For the splash screen, I created three GUI canvases, with a series of 2D images created using MagicaVoxel, to make animated GUIs. I created a one-voxel-tall plane and replaced different colors with each new frame.
[Images: splash screen GUI frames (guione, guitwo, guithree)]
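One simple way to drive such a frame-swapping GUI animation is a flipbook script on a UI Image; this sketch is an illustration, not necessarily how the originals were wired:

```csharp
// Sketch: flip through pre-rendered frames on a UI Image to animate a GUI.
using UnityEngine;
using UnityEngine.UI;

[RequireComponent(typeof(Image))]
public class FlipbookGUI : MonoBehaviour
{
    public Sprite[] frames;            // the MagicaVoxel renders, in order
    public float framesPerSecond = 4f;

    private Image image;

    void Start()
    {
        image = GetComponent<Image>();
    }

    void Update()
    {
        if (frames.Length == 0) return;
        int index = (int)(Time.time * framesPerSecond) % frames.Length;
        image.sprite = frames[index];
    }
}
```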

Augmented Reality Storytelling by Tyler Rhorick

Reading Response

http://ima.nyu.sh/documentation/2017/02/15/mixed-reality-story-and-response-by-tyler-rhorick/

Blippar Intervention

http://ima.nyu.sh/documentation/2017/02/22/i-am-limitless-animation-ar-tyler-rhorick/

Live Broadcast AR

For the Live Broadcast AR assignment, I was part of the team that tried to augment the IMA equipment room to tell the story of a student who was murdered by a cat for turning in equipment late.

Personal Reflection: Overall, the process of converting the space to tell our story with the green screen went pretty well. I would say that our biggest challenge was creating accurate scale, perspective, and lighting. As for the scale and perspective, we were able to achieve a believable enough positioning of the “victim” student after moving the camera angle and Diana multiple times, but the lighting was one thing we could never remedy. I think this means that in the future we should pay better attention to lighting conditions to give our final image a better overall effect. I think we could have figured it out given more time, however, so I am not too sad walking away from this assignment.

Your Photogrammetry

Photogrammetry proved to be one of the most difficult assignments of the semester for me, for reasons I still cannot understand. What I was trying to do in this project was create a 3D model of the meowspace to further the story we created in the Live Broadcast, but this proved more difficult than anticipated because of the following challenges:

  • The real MeowSpace couldn’t be used: Because the meowspace was under modification when this project was assigned, my original plan failed. I was lucky to find a 3D model of meowspace in the lab that I ended up using, but this did cause some slight panic in the beginning.
  • Creating an accurate scan: The biggest problem with creating a photogrammetry model was the difficulty of capturing images the program could successfully use. I think I had a difficult time because the object I was trying to scan was pretty uniform in texture, and lighting was hard to control against the surface of the structure.
  • Software: Another big problem I had was with the software itself. Even allowing for bad pictures, I could never figure out why the program never showed me a model after I followed the steps in the tech template. I showed this problem to Christian, but we still couldn’t figure out what was happening.

Here is a folder containing all of the countless cat pictures I took trying to do this assignment.

Your Game Design Document

Here is our Game Design powerpoint.

Your Core Mechanic Documentation

Here is our Core Mechanic powerpoint.

Your MR interview with Storyboard and Scan

Because there was a misunderstanding when the groups were making their way into the green screen room, Matuez and I got split from our larger group, with which we had made a storyboard playing off the idea of the Sims. Because of this, we had to make a new model and storyline on the spot. To make the figure, we chose Adobe Fuse because it was quick and simple. We decided to make Vladimir Putin wearing makeup because of the recent ban on such imagery in Russia. As for the interview, it was decided that I would interview Matuez acting as Vladimir about topics like Russia, Ukraine, and his makeup.

Here you can see the video and Matuez’s perspective of the experience.

Immersive Sound

For the immersive sound project I decided to use Unity because my Max MSP trial gave out. To do this in Unity I watched several tutorials on YouTube. The lesson of these tutorials was basically that you need to turn on 3D sound by changing the spatial blend. Using this technique I created a simple player and an audio track of the NYU Shanghai alma mater. The player walks around the scene and the sound gets fainter as the player walks away. Here is a screenshot of what I changed in the audio settings to make the sound 3D.

[Screenshot: the modified AudioSource settings]
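The same change can also be made from a script. A minimal sketch (the rolloff values are placeholders):

```csharp
// Sketch: make an AudioSource fully 3D so it fades with distance
// from the AudioListener (the player).
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class SpatialAudio : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;                     // 0 = 2D, 1 = fully 3D
        source.rolloffMode = AudioRolloffMode.Linear; // fade out over distance
        source.minDistance = 1f;
        source.maxDistance = 20f;
        source.loop = true;
        source.Play();
    }
}
```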

Final Project

Project Title: Shanghai Storylines

Partner: Mateuz

Elevator Brief: Shanghai Storylines is an Augmented Reality history experience that communicates the history of Shanghai’s Pudong area. Using their phone, the user can explore the Pudong area of Shanghai and learn more about the area’s often-untold history. The project imagines what Shanghai would have looked like from the NYU Shanghai academic building throughout history.

Extended Description: Shanghai Storylines is made technically possible using Unity and Vuforia. To start the experience, the user walks up to an old Shanghai-style window. Upon scanning the Vuforia marker, the user is introduced to the experience. The first view the user sees is our imagined view of Shanghai from the NYU Shanghai Academic Building in the early 1900s. The landscape was made in Unity.

Technology: Unity, Smart Phone, Vuforia

Development: Before we could start the project, we had to do a great deal of research concerning the history of Shanghai. To start this research, I met with Anna Greenspan, a professor who has focused on the urbanization of China. She shared with me very interesting texts about the historic foliage of Shanghai, research we used in the final project when we chose to make all of the trees broadleaf evergreen models, in accordance with Shanghai’s historic ecosystem. After this research was done, we needed a better idea of what was actually built in Pudong. Though we had the idea that it was just fishing villages, based on widespread “knowledge,” we still decided to research whether that was the correct narrative.

The first lead we got on the prior history of Shanghai came from finding a map of the old Shanghai area on Google Images. Further research showed that this map was one of very few of the area at the time and is widely considered one of the most reliable records of it. Here is that map below:

[Map: Shanghai 1945]

This map proved monumental in moving forward because it told us that Pudong was not always called Pudong: it was formerly called Pootung. This helped us find much more information about the area, because that is the name scholars have always used for it. In searching for Pootung, we came across a book by Paul French called Old Shanghai. With its very detailed description of the Pootung area, we decided to base our project on his research, which included a very colorful history of old Shanghai: a foreign cemetery for those who died at sea, foreign occupation of land controlled by the Chinese government, and animal warehouses that doubled at night in the trade of prostitutes.

After we finished all of the research, which took up most of our time in the first weeks to make sure we were telling a compelling narrative, we began working on the technical side of the project. In hindsight, we should have started this part way sooner, because Matuez and I both had no experience with Unity or 3D modeling.

Because of this, we decided to split up the Unity work. I worked on getting the core mechanism of Vuforia working, while Matuez worked on figuring out how to get the landscapes started. When it came to getting Vuforia working, we first wanted markerless tracking, but this proved more difficult than we anticipated, so we went back to using a marker. I also worked on getting the core mechanism of buttons and text boxes working so that we could communicate the story of Shanghai. While I was doing this, Matuez was learning how to make terrains in Unity. He sent me a working model with the terrains started, and then I watched the same tutorials to finish up the models. To modify what he gave me, I decided to shape the terrain like the actual shape of Pudong. He had given me a square terrain, but I decided that, to be truer to the history, we should try to get the right shape for the terrain. To do this I layered new and old maps on a plane and sculpted the landscape around them. Here is how that process looked.

[Screenshot: sculpting the terrain over layered maps]


I also added water to the scene, to which I attached a script to make the water move. In addition, I fleshed out some areas of the experience to give it a better sense of history, like the docks, the graveyard, and the factory part. Here are some screen grabs of the finalized look of some of these areas.

[Screenshots: the finalized docks, graveyard, and factory areas]
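A sketch of one simple way to script moving water, bobbing the plane on a sine wave (an illustration, not necessarily the project’s actual script):

```csharp
// Sketch: bob a water plane on a sine wave to make it look like it moves.
using UnityEngine;

public class WaterMotion : MonoBehaviour
{
    public float amplitude = 0.1f;   // bob height in world units
    public float speed = 1f;         // bob frequency

    private Vector3 basePosition;

    void Start()
    {
        basePosition = transform.position;
    }

    void Update()
    {
        transform.position = basePosition +
            Vector3.up * (Mathf.Sin(Time.time * speed) * amplitude);
    }
}
```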

All of the assets were made from elements of the free Asset Store. For instance, I made docks out of extremely large and distorted pallets from a warehouse collection on the Unity Asset Store.

In the end, our mechanisms definitely worked, and I think we gave a great history of the region in the time we had. In the future, I would like to expand the historical content of the project and work to make the buttons and menus feel more integrated into the experience.

Here is a video of the mechanisms working:

The Immersive Soundscape of HATCH/宝贝 (Nicole + Saphya)

Finding Sound


“Coins 1” by ProjectsU012 aka Happy Noise

“8-Bit Wrong 2” by TheDweebMan aka Wrong Noise

“8bit-harmony-lowcutoff-envelope” by DirtyJewbs aka Theme Music

We stuck with chiptune and 8-bit audio to match the retro feel of our game.

In-game Music


  1. Sad Noise: this sound is activated whenever a heart is removed.

  2. Happy Noise: this sound is activated every time the player satisfies the qilin by scanning the Vuforia markers, giving the qilin pizza and/or coffee.

  3. Theme Music: this sound plays constantly, starting on awake and looping.


Google Tango Spatial Sound Experiment

Using the same audio, Sean helped us make a Unity scene for the Tango in which users can walk around in a 3D soundscape and approach orbs with an AudioSource attached. You would hear a sound more clearly the closer you got to its orb, and more faintly the farther away you were.


InnerChaos – Mixed Reality Storytelling Final Project – Zeyao, Shirley and Collin

Project Name:

Inner Chaos

Description:

Inner Chaos is an Augmented Reality iOS game developed by Zeyao, Shirley and Collin. The player uses the phone camera to scan common objects to equip their backpack, then uses the equipment in the backpack to fight the boss in the inner world. The player also needs to find the key item in our school so that they can get into the inner world and save the school.

Project Demo Video:


Project detail:

[Screenshots: project details]


HW10 Immersive Soundscape by David, Reine and Diana

The IMA site doesn’t allow me to upload the AIFF format, so I just uploaded it to Google Drive.

Link for the sound: https://drive.google.com/a/nyu.edu/file/d/0BzZ6RMX2hG5HbWdXbW1FeS13Smc/view?usp=sharing

The sound is used for the beginning of our final project. First, people hear the sound of fire. Suddenly, they hear a bird. Then there is an explosion. Finally, they hear some chicken sounds. We wanted to create a scene in which a phoenix tries to wake up from the fire, but something goes wrong and it becomes a chicken. I adjusted the height and the distance of the sounds in Max, but the effect became less obvious after I recorded them in Audacity.

Spatial Sound Documentation – Zeyao, Shirley and Collin

For the last tech template, we used Max 7, Audio Hijack and Adobe Audition to make a spatial sound piece. We intended to create the environment of a public classroom. Imagine you are in that classroom: in front of you there is a student who is struggling with homework and wandering around. Behind you there are a bunch of people yelling and chilling out. Some people get shocked by something and scream. At the end, the main character is mad, so he screams. The purpose of our spatial sound is to let the audience experience this chaotic environment.

[Images: WechatIMG47, WechatIMG49]


Link: https://drive.google.com/drive/folders/0B9t9c61LjFhkTHctOURMZXEyNjA?usp=sharing

HATCH/宝贝 Core Mechanic (Nicole + Saphya)


The Basics:

The foundation of our app is controlled by four scripts: Time Management, Heart System, Camera Button, and Tracking Event Handler.

Time Management controls the active states of the hearts in the GUI. Using an array, this script sets the active state of each heart to false at thirty-second intervals by calling a function in the HeartSystem script named HeartDeletion().

https://github.com/saphya-council/hatch-spr17/blob/master/timemanagement.cs
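The linked timemanagement.cs is the actual implementation; this is just a minimal sketch of the interval logic described above (names other than HeartDeletion() are assumptions):

```csharp
// Sketch: call HeartDeletion() every thirty seconds.
using UnityEngine;

public class TimeManagement : MonoBehaviour
{
    public HeartSystem heartSystem;   // the script that owns the heart array
    public float interval = 30f;      // seconds between heart losses

    private float elapsed;

    void Update()
    {
        elapsed += Time.deltaTime;
        if (elapsed >= interval)
        {
            elapsed = 0f;
            heartSystem.HeartDeletion();   // deactivates the next heart
        }
    }
}
```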

Camera Button is connected to the lower button in the GUI that activates the ARCamera. This script is attached to both the coffee button and the pizza button, and it passes several parameters associated with the button’s identity into the Tracking Event Handler script. If OnTrackingFound() fires for the specified image target, then HeartAddition() from the HeartSystem script is called.

https://github.com/saphya-council/hatch-spr17/blob/master/camerabutton.cs

Tracking Event Handler cross-checks what is being scanned by the ARCamera against the image target the CameraButton script has sent it. This ensures that the camera doesn’t deinitialize after scanning just any image target in the database, and that the qilin receives the correct icon popup in the game.

https://github.com/saphya-council/hatch-spr17/blob/master/trackingeventhandler.cs
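A sketch of that cross-check, assuming the Vuforia 6-era ITrackableEventHandler API (the linked trackingeventhandler.cs is the real script):

```csharp
// Sketch: only react when the scanned target matches the one the
// button asked for, so other targets in the database are ignored.
using UnityEngine;
using Vuforia;

public class TrackingEventHandler : MonoBehaviour, ITrackableEventHandler
{
    public string expectedTargetName;   // set by the CameraButton script
    public HeartSystem heartSystem;

    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED;

        if (found && trackable.TrackableName == expectedTargetName)
            heartSystem.HeartAddition();   // the qilin was fed correctly
    }
}
```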

Heart System is connected to both the CameraButton and Time Management scripts, and controls the qilin’s animations and appearance. If the qilin’s hearts reach zero, the qilin turns into a pile of bones; otherwise it stays a cute qilin.

https://github.com/saphya-council/hatch-spr17/blob/master/heartsystem.cs
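And a sketch of the heart bookkeeping and the bones swap (again, the linked heartsystem.cs is the real implementation; these names are assumptions):

```csharp
// Sketch: track hearts in an array and swap the qilin for bones at zero.
using UnityEngine;

public class HeartSystem : MonoBehaviour
{
    public GameObject[] hearts;   // GUI heart icons, left to right
    public GameObject qilin;      // the cute model
    public GameObject bones;      // shown when the hearts run out

    private int current;

    void Start()
    {
        current = hearts.Length;
    }

    public void HeartDeletion()
    {
        if (current <= 0) return;
        current--;
        hearts[current].SetActive(false);
        if (current == 0)
        {
            qilin.SetActive(false);
            bones.SetActive(true);
        }
    }

    public void HeartAddition()
    {
        if (current >= hearts.Length) return;
        hearts[current].SetActive(true);
        current++;
        qilin.SetActive(true);
        bones.SetActive(false);
    }
}
```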

Future Plans:

We tried to incorporate multiplayer compatibility so that couples can chat and take care of the same qilin. We first attempted this with the Unity Multiplayer Networking tutorial; however, it did not work between a PC and a mobile phone, because there was no server to connect the two devices. Next, we explored Photon Unity Networking after getting advice from Sean. This was better, because Photon provides a cloud service that can be accessed much like an API. In the short time that we had, we prioritized perfecting the Vuforia camera and the appearance of our app over the networking component. In the future, we hope to finish our work on that part of the app.