Final Write-up

Anxiety, as a standalone condition or as part of a broader mental health condition, is one of the most prevalent mental disorders amongst children and adolescents. According to the World Health Organization (WHO), at least one in 13 people struggles with some form of anxiety disorder.

The advent of technology has opened up more avenues for coping with anxiety. Virtual reality, a medium that is becoming increasingly accessible, offers an immersive experience that has the potential to transport one’s mental state to a calmer place. It is this potential that I seek to explore in my virtual reality experience, A Walk in the Park (AWP).

AWP is a virtual reality experience that transports the user to an all-white flower garden whose colours fade in as the user becomes calmer. In the virtual garden, the user is welcomed by a guide, in the form of text hovering in the virtual world, with the sound of a bell playing in the background. The guide instructs the user to draw a circle on the trackpad of the HTC Vive's controller each time they hear the bell. As the user draws circles on the trackpad, the colours of the garden fade in. If the user misses some circles, the colours simply fade out a little.
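The feedback loop can be sketched roughly like this (a hypothetical JavaScript sketch, not the actual Unity code; the function name, gain, and decay values are my own illustration):

```javascript
// Hypothetical sketch of AWP's colour-fade logic: saturation rises with
// each completed circle and decays slightly on a miss, clamped to 0..1.
function updateSaturation(saturation, circleCompleted) {
  const gain = 0.05;  // fade-in per completed circle (illustrative value)
  const decay = 0.02; // fade-out per missed circle (illustrative value)
  const next = circleCompleted ? saturation + gain : saturation - decay;
  return Math.min(1, Math.max(0, next)); // keep saturation in [0, 1]
}
```

The clamping matters: missing circles only nudges the garden back towards white rather than resetting it, so the experience stays forgiving.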

More than a tool to calm one’s nerves, AWP is an exploration of what being “calm” really means. Throughout the building process, the questions of what “calm” means and how calm manifests in the body were repeatedly asked and challenged. Is calm a slower heartbeat? Slower breaths? Better concentration? A mix of all the above?

Ultimately, I look forward to seeing AWP used not only as a coping tool, but also as a platform that future researchers and developers can build on when they seek to answer questions around the notion of calmness and a healthy mental state.

Capstone Video:


R-UI Final

For the finals, I decided to make a personal blog for my brother. He actually approached me to ask if I could make a mock-up of a blog for him. I then thought, well, perhaps this could be my finals project.

The main design inspiration came from looking at his class notes. I’m in several classes with him (surprise, he goes to NYU SH, too!) and he wouldn’t go to school without his yellow highlighter or a pen to highlight and underline his notes, so I figured I’d use highlighted text as part of the blog.

I think the biggest challenge of working on this was keeping the page, especially the article page, from feeling cluttered, but also not too bland or cold. I don’t think I’ve achieved that nice balance yet, but I think I did okay in making it feel less cluttered.

The link to the github page is:

Capstone Project Progress

So far…

1. Sensors  

At this point, I have tried several sensors and several different ways of placing them. I elaborated on this in greater detail in last week’s post here. I have yet to try the MindWave headset that Leon suggested; this will definitely be one of my top priorities.

2. 3D Models 

I actually began working on the 3D models of the flowers much earlier in the process. However, Prof Naimark suggested that I focus on the sensor part of the project first. This advice turned out to be very valuable, as I did run into several challenges whilst figuring out the sensors. The challenges ranged from not getting the data I wanted to not actually having the device in my possession.

This week, though, after making quite meaningful progress on the sensor side, I plan on resuming my work on the 3D models. Below is one of the models I would like to continue working on.

3. Connecting Arduino to Unity 

To connect the Arduino to Unity, I initially did it without any Unity plugin, as demonstrated on this website. I tried using a potentiometer on the Arduino side and a preset 3D world example in Unity. Whilst this worked just fine, I did run into several errors that, I have to say, made me very nervous. In response, I figured I’d give Uniduino, the Unity plugin, a try.

Setting up the Arduino and Unity using this plugin turned out to be much easier. There are some points I had to constantly remind myself of, e.g. using the StandardFirmata library on the Arduino.

In testing out the plugin, I went through several stages. They were:

1. Testing out using the built-in LED

2. Testing out using a potentiometer to rotate a 3D cube in Unity

3. Using a potentiometer to change the colour of the 3D cube in Unity

I didn’t run into any meaningful problems during the first two stages, but in the third stage I couldn’t quite figure out how to make the colour change gradual. I suspect I need to make the variable change smaller, but when I tried that, I still couldn’t quite create a gradual change of colour (fading in) on the cube. This is something I would like to explore further.
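One idea for the gradual fade, sketched here in plain JavaScript rather than the actual Unity C# (the function name and values are illustrative): instead of changing the colour by a fixed amount, move a small fraction of the remaining distance towards the target every frame, similar to calling a lerp per frame.

```javascript
// Move a fraction t of the way from the current value to the target.
function stepTowards(current, target, t) {
  return current + (target - current) * t; // t in (0, 1], e.g. 0.05
}

let grey = 0;     // current colour channel value
const full = 255; // target colour channel value
for (let frame = 0; frame < 60; frame++) {
  grey = stepTowards(grey, full, 0.05); // ease in over many frames
}
// after 60 frames, grey has eased most of the way towards 255
```

Because each step covers only part of the remaining distance, the change starts fast and slows down as it approaches the target, which reads as a smooth fade rather than a jump.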

Looking back at my schedule…

It seems that I’m quite on schedule. This week, as planned, I will have finished testing all the sensors I want to test. The exception is the 3D models, which I haven’t quite finished because, as Prof Naimark anticipated, sensor testing took a while.



The final address book I have for the midterm is below (link on Github):

In the pre-midterm blog post, I mentioned several actions I hope my final product will be able to perform. They are:

The user will be able to sort the contacts alphabetically (descending or ascending)

The user will be able to filter the contacts based on the location of the contact

The user will be able to search for a contact

The user will be able to ring the contact on different apps through this one contact book.

The user will be able to zoom in (enlarge the contact’s font size) and show the contact’s picture by pressing and holding the contact’s name
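As a rough sketch of the first three behaviours (the sample data and helper names here are illustrative, not the actual project code):

```javascript
// Tiny sample contact list (made up for illustration).
const contacts = [
  { name: 'Maya', location: 'Shanghai' },
  { name: 'Leon', location: 'New York' },
  { name: 'Ava',  location: 'Shanghai' },
];

// Sort alphabetically, ascending, without mutating the original list.
const sortAsc = (list) =>
  [...list].sort((a, b) => a.name.localeCompare(b.name));

// Filter by the contact's location.
const byLocation = (list, loc) =>
  list.filter((c) => c.location === loc);

// Case-insensitive name search.
const search = (list, query) =>
  list.filter((c) => c.name.toLowerCase().includes(query.toLowerCase()));
```

In React, each of these would be driven by a piece of state (sort direction, selected location, search query) and re-run on render.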

Whilst I’m certainly excited that I managed to tick all the boxes (that I created myself), I wonder if it’s because I kept my goals rather narrow. But when I finished the goals I had set for myself, I realised it actually gave me room to add some other elements to the address book.

1. Night Mode 

From my research, I learnt that not many address books offer customisation just for the address book itself. The look of the address book, especially the mobile’s default one, often depends on the mobile’s entire theme. This was something I knew I would like to address if I still had time to work on the project (which I did), so I created a night-mode view option.

I used CSS’s filter property, triggered by a change of React state, to modify the theme, but I didn’t realise the filter would be laid on all the elements, including the profile photos. To handle this, I created a class, added to those elements, that keeps the profile pictures unfiltered whilst the rest of the elements change colour.
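In rough JavaScript terms (the class names here are illustrative, not the actual ones in my code), the idea looks like this:

```javascript
// Compute the class names for the themed page and the exempted photos.
// The 'night' class would apply something like filter: invert(1) in CSS,
// and 'unfiltered' would apply invert(1) again to cancel it for photos.
function themeClasses(nightMode) {
  return {
    page: nightMode ? 'page night' : 'page',
    photo: nightMode ? 'photo unfiltered' : 'photo',
  };
}
```

In the React component, `nightMode` would live in state and the returned strings would feed each element’s className, so toggling the state re-renders the whole theme.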


User-testing 1: Capstone

The main purpose of the first user testing was to test:

1. the distance between the user and the sensor that the user finds comfortable.

2. how well the sensor picks up the breathing (inhale/exhale)

3. whether the basic meditation guide I have is easy to follow. The full meditation guide will be based on the triangle breathing method, where the participant inhales for 3 seconds, holds the breath for 2 seconds, and exhales for 3 seconds. During this user testing session, I simply asked the participants to follow that cycle: inhale for 3 seconds, hold for 2, and exhale for 3.
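The timing of that cycle can be sketched as follows (an illustrative JavaScript sketch, not code from the project): inhale for 3 seconds, hold for 2, exhale for 3, repeating every 8 seconds.

```javascript
// Return which breathing phase the participant should be in at a given
// elapsed time, based on the 3-2-3 triangle breathing cycle (8 s total).
function breathingPhase(elapsedSeconds) {
  const t = elapsedSeconds % 8; // position within the 8-second cycle
  if (t < 3) return 'inhale';   // seconds 0-3
  if (t < 5) return 'hold';     // seconds 3-5
  return 'exhale';              // seconds 5-8
}
```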

The sensor I tested was DFRobot’s Piezo Vibration Sensor. I had six testers try out the sensor. Here are the findings related to the three points above.


Pre-Midterm Assignment

Physical Address/Contact Book 

My family used to keep physical contact books, at least until we started turning to our mobiles to keep track of our contacts. One of the things I really like about physical contact books is their alphabet tabs, which make it easy to jump to a certain letter whenever I have to find a contact. Whilst a modern mobile’s search filter is arguably more powerful, I personally tend to remember only the first letter of the people I need to contact. At least on my mobile, the search feature returns all contacts that contain the letters I’m searching for, which, on my end, makes sorting through the candidates for the actual person I want to contact a little harder. Moreover, although there is a virtual alphabet tab on the side of the screen, it’s still a little small (I sometimes tap the wrong letter) compared to the tabs a physical contact book has.


Dynamic Forms.

Unlike previous weeks’ homework, this time I began with the CSS side of the interface first, with placeholder content in it. I decided to do that because I couldn’t quite wrap my head around the concept of dynamic forms at the beginning, so I figured it was better to do something, i.e. get the CSS side done, than nothing.

Afterwards, I began passing forms into both the input and output components (the input components are the forms, whilst the output components are the corresponding fields in the poster). This was quite easy, so I moved on to passing the values from the input component to the parent (so that it could pass them to the output component). However, I quickly realised that since the input fields are basically one component, updating one field made the other fields show the same values… which is quite cool, except that’s not the effect we’re going for here…

Although the idea of calling functions and passing props is quite easy to grasp, understanding where to pass what wasn’t exactly as easy. This is when I really had to go back to the drawing board and learn what form inputs are. I tried to learn what attributes a form input has, and well… there are quite a number of them and I am not quite familiar with them. I have to thank that page for helping me figure out what props to pass to which attributes and eventually helping me solve this week’s assignment. I made a couple of modifications to the code suggested on that page, though. I received an error message when I created a variable and passed the value into it in App.js. I have yet to figure out why that happened, but my guess is that it’s because the name attribute of my input isn’t a set value but a prop I pass into the input from the parent. So I moved the variables into Input.js and passed them to the parent as I invoked the onChange function in Input.js. It worked!
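The gist of the pattern, as a simplified JavaScript sketch (the field and function names are illustrative): each input reports its own name alongside its value, so a single handler in the parent can update the right field using a computed property key.

```javascript
// Return a new values object with one named field updated, leaving the
// rest untouched. The [name] computed key is what lets one handler
// serve every input field.
function updateField(values, name, value) {
  return { ...values, [name]: value };
}

// e.g. called from an input's onChange as
// updateField(state, e.target.name, e.target.value)
let poster = { title: '', author: '' };
poster = updateField(poster, 'title', 'Dynamic Forms');
```

This is why reusing one input component works: the component itself stays generic, and the `name` prop decides which piece of state it controls.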

Below is my final take on this week’s assignment:

Week 3 – Event Handling

link to Github 

Similar to last week’s assignment, I approached it by working on the JS side first before diving into the CSS side. I started working on the assignment during class on Wednesday. I made each button trigger a different function that updates the state according to which button is clicked. However, I figured that this wasn’t the best code, mainly because of the multiple near-identical functions. Rune then suggested a different (much better!) way of handling events: passing props into a function in the Button component, and having it invoke the corresponding function in App.js.
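A simplified sketch of that pattern, stripped of the React specifics (all names here are illustrative): rather than one hand-written function per button, the parent supplies a single setter and each button closes over its own label.

```javascript
// Build a click handler for one button: when invoked, it reports this
// button's label back to the parent's single state setter.
function makeClickHandler(setActive, label) {
  return () => setActive(label);
}

// Stand-in for the parent's state and setter.
let active = null;
const setActive = (label) => { active = label; };

const handleRed = makeClickHandler(setActive, 'red');
handleRed(); // simulate a click on the red button
```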

From there, I did the CSS work. Again, the Chrome developer tools came in handy.

Assignment 1: NYTimes Webpage

This week, we were to recreate a section of the NYTimes webpage by reusing React.js components.

Below is my take on this week’s assignment (on Github):

GIF of Assignment1


Initially, I created three different components: date, article (consisting of the title, body, and the author’s name), and image. I started by creating blocks of components in React and slowly swapping the placeholder content for the actual content.

I found building this page slowly from the bottom up to be very helpful in different ways. Firstly, it became easier to troubleshoot problems, especially the ones related to styling. While React.js gives error notifications and is thus really great for catching errors, styling is another challenge, because even when the code doesn’t contain any errors, it might not render into what I had imagined. It took a while to get the styling sorted out, but I was quite happy with the result.

However, although this solution works, Rune later suggested that I combine the three components (date, article, and image) into one big component.

Whilst combining them in the .js file isn’t a lot of work, tidying up the CSS is a bit of work. This is where the React developer extension came in very handy. When my normal developer tools weren’t able to pick up the CSS elements of this page, the React Chrome extension helped me identify where my padding sizes had gone wrong, etc.

All in all, it was definitely a learning curve. Whilst doing this project, I really wished I had known about for loops and arrays in React.js, things I know quite well in p5.js, for instance (which I suppose we’ll cover in the coming weeks), because I imagine they would make the code a little neater and more readable than what I have at the moment.

Week 1: What’s Motion

What’s motion design

To me, motion design is a field where one creates visual work that moves in a meaningful but also pleasant manner. I like to think that while it doesn’t necessarily mean everything has to address the big questions or problems in life, the way the graphics move needs to somewhat represent or mean something, as opposed to being entirely arbitrary. This meaningfulness will hopefully allow the audience to understand the message the designer is trying to convey, or perhaps set the tone for a message the designer is about to share with them.