Capstone Project Progress

So far…

1. Sensors  

At this point, I have tried several sensors and several different ways of placing them. I elaborated on this in greater detail in last week’s post here. I have yet to try the MindWave headset that Leon suggested; this will definitely be one of my top priorities.

2. 3D Models 

I actually began working on the 3D models of the flowers much earlier in the process. However, Prof Naimark suggested that I focus on the sensor part of the project first. This advice turned out to be very valuable, as I did run into several challenges whilst figuring out the sensors, ranging from not getting the data I wanted to not actually having the device in my possession.

This week, though, after making meaningful progress on the sensor part, I plan on resuming my work on the 3D models. Below is one of the models I would like to continue working on.

3. Connecting Arduino to Unity 

To connect Arduino to Unity, I initially did it without any Unity plugin, as demonstrated on this website. I tried using a potentiometer on the Arduino side and a preset 3D world example in Unity. Whilst this worked just fine, I did run into several errors that, I have to say, made me quite nervous. In response, I figured I’d give Uniduino, a Unity plugin, a try.

Setting up the Arduino and Unity using this plugin turned out to be much easier. There were some points I had to constantly remind myself of, e.g. making sure StandardFirmata is running on the Arduino.

In testing out the plugin, I went through several stages. They were:

1. Testing out using the built-in LED

2. Testing out using a potentiometer to rotate a 3D cube in Unity

3. Using a potentiometer to change the colour of the 3D cube in Unity

I didn’t run into any meaningful problems during the first two stages, but in the third stage I couldn’t quite figure out how to make the colour change gradual. I suspect I need to change the variable in smaller increments, but when I tried that, I still couldn’t quite create a gradual change of colour (a fade) on the cube. This is something I would like to explore further.

Looking back at my schedule…

It seems that I’m roughly on schedule. This week, as planned, I will have finished testing all the sensors I want to test. The exception is the 3D models, which I haven’t quite finished because, as Prof Naimark anticipated, sensor testing took a while.



The final address book I have for the midterm is shown below (link on Github):

In the pre-midterm blog post, I mentioned several actions I hoped my final product would be able to perform (a rough sketch of the sort/filter/search logic follows the list). They are:

The user will be able to sort the contacts alphabetically (descending or ascending)

The user will be able to filter the contacts based on the location of the contact

The user will be able to search for a contact

The user will be able to ring the contact on different apps through this one contact book.

The user will be able to zoom in (enlarge the contact’s font size) and show the contact’s picture by pressing and holding the contact’s name
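
To give a rough idea of how the first three of these work, here is a minimal sketch of the sort, filter, and search logic in plain JavaScript. The contact shape and helper names below are made up for this post, not the exact code in my repo.

```javascript
// Hypothetical contact shape: { name, location }
const contacts = [
  { name: 'Ade', location: 'Jakarta' },
  { name: 'Bram', location: 'New York' },
  { name: 'Citra', location: 'Jakarta' },
];

// Sort alphabetically, ascending or descending.
function sortContacts(list, ascending = true) {
  const sorted = [...list].sort((a, b) => a.name.localeCompare(b.name));
  return ascending ? sorted : sorted.reverse();
}

// Filter by the contact's location.
function filterByLocation(list, location) {
  return list.filter((contact) => contact.location === location);
}

// Search by name (case-insensitive substring match).
function searchContacts(list, query) {
  const q = query.toLowerCase();
  return list.filter((contact) => contact.name.toLowerCase().includes(q));
}

console.log(sortContacts(contacts, false));      // Citra, Bram, Ade
console.log(filterByLocation(contacts, 'Jakarta')); // Ade, Citra
console.log(searchContacts(contacts, 'br'));     // Bram
```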

Whilst I’m certainly excited that I managed to tick all the boxes (that I created myself), I wonder if it’s because I kept my goals rather narrow. But once I had completed the goals I set for myself, I realised that it gave me room to add some other elements to the address book.

1. Night Mode 

From my research, I learnt that not many address books offer customisation just for the address book; the look of the address book, especially the mobile’s default one, often depends on the mobile’s overall theme. This was something I knew I would like to address if I still had time to work on the project (which I did), so I created a night-mode view option.

I used CSS’s filter property to modify the theme, triggered by a change of React state, but I didn’t realise the filter would be applied to all the elements, including the profile photos. To handle this, I created a class, added to those elements, that keeps the profile pictures unfiltered whilst the rest of the elements change colour.
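
As a rough illustration of the idea (a simplified sketch with made-up component names, not my actual code), applying the same filter to the photos cancels out the inversion applied to their parent:

```javascript
import React, { useState } from 'react';

function AddressBook({ contacts }) {
  const [nightMode, setNightMode] = useState(false);
  // invert(1) is used here as a simple example of a theme-flipping filter
  const invert = { filter: 'invert(1)' };

  return (
    <div style={nightMode ? invert : undefined}>
      <button onClick={() => setNightMode(!nightMode)}>
        {nightMode ? 'Day mode' : 'Night mode'}
      </button>
      {contacts.map((contact) => (
        <div key={contact.name}>
          {/* filtering the photo again undoes the parent's inversion,
              so the profile picture keeps its original colours */}
          <img
            src={contact.photo}
            alt={contact.name}
            style={nightMode ? invert : undefined}
          />
          <span>{contact.name}</span>
        </div>
      ))}
    </div>
  );
}

export default AddressBook;
```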


User-testing 1: Capstone

The main purpose of the first user-testing session was to test:

1. the distance between the user and the sensor that the user finds comfortable.

2. how well the sensor picks up the breathing (inhale/exhale)

3. whether the basic meditation guide I have is easy to follow. The full meditation guide will be based on the triangle breathing method, where the participant is asked to inhale for 3 seconds, hold the breath for 2 seconds, and exhale for 3 seconds. During this user-testing session, I simply asked the participants to follow that pattern: inhale for 3 seconds, hold for 2 seconds, and exhale for 3 seconds.

The sensor I tested was DFRobot’s Piezo Vibration Sensor, and I had six testers try it out. Here are the findings related to the three points above.


Pre-Midterm Assignment

Physical Address/Contact Book 

My family used to keep physical contact books, at least until we started turning to our mobiles to keep track of our contacts. One of the things I really like about physical contact books is their alphabet tabs, which make it easy to jump to a certain letter whenever I have to find a contact. Whilst a modern mobile’s search feature is arguably more powerful, I personally tend to remember only the first letter of the people I need to contact. At least on my mobile, the search feature returns all contacts that contain the letter I’m searching for, which makes sorting through the candidates for the actual person I want to contact a little harder. Moreover, although there is a virtual alphabet tab on the side of the screen, it’s still quite small (I sometimes tap the wrong letter) compared to the tabs a physical contact book has.


Dynamic Forms.

Unlike previous weeks’ homework, this time I began with the CSS side of the interface first, with placeholder content in it. I decided to do that because I couldn’t quite wrap my head around the concept of dynamic forms at the beginning, so I figured it was better to do something, i.e. get the CSS side done, than nothing.

Afterwards, I began passing the form fields into both the input and output components (the input components are the forms, whilst the output components are the corresponding fields in the poster). This was quite easy, so I moved on to passing values from the input component up to the parent (so that it could pass them to the output component). However, I quickly realised that since the input fields are all instances of one component, updating one field made the other fields show the same value… which is quite cool, except that’s not the effect we’re going for here…

Although the idea of calling functions and passing props is quite easy to grasp, understanding where to pass what wasn’t exactly as easy. This is when I really had to go back to the drawing board and learn what form inputs are. I tried to learn what attributes a form input has and, well… there are quite a number of them and I am not very familiar with them. I have to thank this page for helping me figure out what props to pass to which attributes and, eventually, how to solve this week’s assignment. There are a couple of modifications I made to the code suggested on that page, though. I received an error message when I created a variable and passed the value into that variable in App.js. I have yet to figure out why, but my guess is that it’s because the name attribute of my input isn’t a set value but rather a prop I pass into the input from the parent. So I moved the variables into Input.js and passed them to the parent when invoking the onChange function in Input.js. It worked!
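
In case it helps future me, here is a simplified sketch of the setup described above (the field names and JSX are made up, not my exact files): the parent keeps one state object keyed by each input’s name, and each Input reports its name along with the new value.

```javascript
import React, { useState } from 'react';

// Each Input receives a `name` prop from the parent and reports
// (name, value) back up whenever it changes.
function Input({ name, label, value, onChange }) {
  return (
    <label>
      {label}
      <input
        name={name}
        value={value}
        onChange={(event) => onChange(name, event.target.value)}
      />
    </label>
  );
}

export default function App() {
  const [fields, setFields] = useState({ title: '', body: '' });

  // Only the field that changed is updated; the others keep their values.
  const handleChange = (name, value) =>
    setFields((previous) => ({ ...previous, [name]: value }));

  return (
    <div>
      <Input name="title" label="Title" value={fields.title} onChange={handleChange} />
      <Input name="body" label="Body" value={fields.body} onChange={handleChange} />
      {/* output side: the poster renders the same values */}
      <h1>{fields.title}</h1>
      <p>{fields.body}</p>
    </div>
  );
}
```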

Below is my final take on this week’s assignment:

Week 3 – Event Handling

link to Github 

Similar to last week’s assignment, I approached it by working on the JS side first before diving into the CSS side. I started working on my assignment during class on Wednesday. I made each button trigger a different function that updates the state according to which button is clicked. However, I felt that this wasn’t the best code, mainly because of the multiple near-identical functions. Rune then suggested a different (much better!) way of handling events: pass props into the Button component and have it invoke a handler function defined in App.js.
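
A simplified sketch of that pattern (with made-up button labels, not my actual assignment code) looks something like this:

```javascript
import React, { useState } from 'react';

// The Button has no logic of its own; it just reports which label was
// clicked by invoking the handler passed down from App.
function Button({ label, onPress }) {
  return <button onClick={() => onPress(label)}>{label}</button>;
}

export default function App() {
  const [selected, setSelected] = useState(null);

  return (
    <div>
      {['Red', 'Green', 'Blue'].map((label) => (
        <Button key={label} label={label} onPress={setSelected} />
      ))}
      <p>Selected: {selected ?? 'nothing yet'}</p>
    </div>
  );
}
```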

From there, I did the CSS work. Again, the Chrome developer tools came in handy.

Assignment 1: NYTimes Webpage

This week, we were to recreate a section of the NYTimes webpage by reusing React.js components.

Below is my take on this week’s assignment (on Github):

GIF of Assignment1


Initially, I created three different components: date, article (consisting of title, body, and the author’s name), and image. I started by creating blocks of components in React and slowly swapping the placeholder content for the actual content.

I found building this page slowly from the bottom up to be very helpful in different ways. Firstly, it became easier to troubleshoot problems, especially the ones related to styling. While React.js gives error notifications and is therefore really great for catching errors, styling is another challenge: even when the code doesn’t contain any errors, it might not render into what I had imagined. It took a while to get the styling sorted out, but I was quite happy with the result.

However, although this solution works, Rune later suggested that I combine the three components (date, article, and image) into one big component.

While combining them in the .js file isn’t a lot of work, tidying up the CSS is a bit of work. This is where the React Developer Tools extension came in very handy: when the normal developer tools weren’t able to pick up the CSS elements of this page, the React Chrome extension helped me identify where my padding sizes had gone wrong, etc.

All in all, it was definitely a learning curve. Whilst doing this project, I really wished I had known how to use loops and arrays in React.js, things I know quite well in p5.js for instance (which I suppose we’ll cover in the coming weeks), because I imagine they would make the code a little neater and more readable than what I have at the moment.
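
This is roughly what I mean (a sketch with placeholder data, not my actual assignment code): keep the articles in an array and let one combined component render each of them, instead of repeating near-identical JSX.

```javascript
import React from 'react';

// Placeholder data standing in for the real NYTimes content.
const articles = [
  { date: 'Sept 3', title: 'Headline one', body: 'Body text…', author: 'A. Writer', image: 'one.jpg' },
  { date: 'Sept 3', title: 'Headline two', body: 'Body text…', author: 'B. Writer', image: 'two.jpg' },
];

// One combined component: date, article text, and image together.
function Article({ date, title, body, author, image }) {
  return (
    <article>
      <p>{date}</p>
      <img src={image} alt={title} />
      <h2>{title}</h2>
      <p>{body}</p>
      <p>By {author}</p>
    </article>
  );
}

export default function Section() {
  return (
    <section>
      {articles.map((article) => (
        <Article key={article.title} {...article} />
      ))}
    </section>
  );
}
```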

Week 1: What’s Motion

What’s motion design

To me, motion design is a field where one creates visual work that moves in a meaningful but also pleasant manner. I like to think that while it doesn’t necessarily mean that everything has to address the big questions or problems in life, the way the graphics move needs to represent or mean something, as opposed to being entirely arbitrary. This meaningfulness will hopefully allow the audience to understand the message the designer is trying to convey, or perhaps set the tone for a message the designer is about to share with them.

Week 1: A response to Philip Galanter’s What Is Generative Art?

When reading this piece, I was ready to jump straight into his take on what generative art is. Instead, I was first introduced to the discourse around the definition, or rather the making of the definition, of generative art. I find it really interesting that he uses humour to question things we sometimes take for granted, for instance when he talks about “generative art is an art, that’s generative”. While this is indeed funny, and could sound frustrating to some, it does raise the questions “what is art?” and “what is generative?”. He gives us some space to think and wonder on our own before introducing his own take on what generative art is in the next part. That kind of freedom to think is something I really appreciate in this piece.

I also appreciate the fact that he addresses some of the artworks that I did think would fall under the category of generative art if I were to follow his definition, e.g. Pollock’s paintings (or rather, the ones he’s famous for).

Lastly, I really enjoyed his take on the role of science in creating generative art. I think people often see art as the exact opposite of science. However, generative art shows that there are points where art and science (or maths) do intersect. Instead of seeing them as opposing forces, we are able to see them as components that work together to create something meaningful and beautiful.


Week 1: A response to As We May Think by Vannevar Bush

Which similarities exist between what Bush describes and the computers that we use today? What did not come to fruition?

In As We May Think, Vannevar Bush envisions five devices that would augment human capabilities in performing complex calculations, repetitive work, and association. The five devices he mentioned were:

Cyclops Camera: an overhead camera that will photograph and record anything we see

Microfilm: a desk-size encyclopaedia that houses as much information as the Encyclopaedia Britannica does

Vocoder: a speech-to-text device

Thinking Machine: a calculator capable of performing complex calculations

Memex: a physical search engine

These five devices are similar to the ones we have today in their ability to perform tasks that would take a human brain a long time. Moreover, these devices are also informed by past inventions, just like the ones we have today. A computer these days, for instance, isn’t created out of thin air; rather, it’s a device that’s very much informed by calculators and other past inventions used to calculate. This is similar to Bush’s Microfilm, which was envisioned to be informed by the Encyclopaedia Britannica in terms of what it does.

I think the conceptual ideas behind Bush’s devices have all come to fruition in the form of today’s devices. However, I do think they took different shapes than what Bush had imagined. Take the Memex, for instance: a search engine today is a digital, as opposed to a physical, invention, although it is indeed accessed through a physical device (our mobiles or laptops). If I had to find the modern-day counterparts of Bush’s devices, the list would look like this:

Cyclops Camera: a GoPro with its headband, or even Snap’s Spectacles

Microfilm: Wikipedia, accessed with our mobiles and laptops

Vocoder: speech-to-text software such as Apple’s dictation or the speech-to-text features in Google Docs and Microsoft Word. I think Bush mentioned that it was envisioned to be a “supersecretary”; in that case, I’d imagine a secretary does more than just take notes, and perhaps Amazon’s Echo/Alexa and Google Assistant would be closer to a “supersecretary” than a speech-to-text plugin.

Thinking Machine: our computers, perhaps? Or even a modern scientific calculator (the TI-Nspire, for instance)

Memex: although Google isn’t a physical invention, I think Bush would say that it does what he imagined the Memex would do.