The year 2018 marks the dawn of mobile Augmented Reality (AR). With new technologies like ARKit and ARCore widely available to developers and users, our phones can see the world like never before. Phones know where they are and can track themselves spatially. This allows users to interact with objects in a virtual environment through the screen as if they existed in the real world, opening up new possibilities for interactive experiences. The problem, however, is that the screen has a limited size. In virtual reality, users can turn their heads instantly to see interactions happening around them; this is not possible in mobile AR. ProjectAR seeks improvements in this direction.
The experimental interactive game “ProjectAR” explores extended immersion with the help of projection mapping. The player sees a virtual world through the phone, which works like a magnifying glass, while watching the floor, onto which a ceiling-mounted projector displays hints about where targets can be found. The game itself is simple: the player needs to find as many sprites as possible in a pond of fog. Sprites inside the screen won’t move, since they are trying to hide from the player, but sprites outside the screen may move, leaving a trace on the projected floor. After figuring out where a sprite is, the player can blow into the phone to dispel the fog, reveal the sprite underneath, and tap to collect it.
ProjectAR aims to experiment with and explore what works and what doesn’t in mobile AR, and to tackle some of its weaknesses using existing technologies. The initial idea for this combined platform was a spatial Twitter visualization; during user testing, however, it did not encourage much interaction. A game pushes the interaction and tracking capabilities to their limits and encourages participation.
ProjectAR can be set up as an interactive installation using one ARCore-enabled mobile phone, one computer, and one projector.
Project video: https://photos.app.goo.gl/riAov3wroUJpAAOq1
Presentation Slides and project files: https://drive.google.com/open?id=1AXm-VZ4UDwWUgeLyBy5C-Wemz_LrMkeU
IMA is a unique major, and every IMA student is unique. As graduation approaches, every senior is preparing a portfolio to show their best work to the world. However, the examples found on the Internet usually focus on one specific area.
For IMA students like me who are not sure what to do but are capable of several different kinds of jobs, portfolio design faces challenges similar to those of full-service advertising agencies, which showcase a wide variety of work. This portfolio is also a React remake of my previous Vue.js version, built to compare the strengths and weaknesses of both frameworks.
During this period, I sent descriptions and storyboards to a few friends outside of China for some remote user testing and feedback.
One game designer suggested that since my main focus was the combination of AR and projection mapping, the content itself isn’t what matters most. But the subject I had picked for this new combined medium didn’t invite much user participation; in other words, the content I was making wasn’t interesting and attractive enough. We talked for a few hours and figured out a new game that could reuse most of the interactions I had already programmed while further emphasizing the new possibilities introduced by this combination.
The new game is still based on the same setup: one ARCore-enabled mobile phone, one computer, and one projector. The main goal is to catch the little glowing sprites running around under the fog. The fog is displayed on the phone and also leaves a white glow in the physical room via projection. The sprites leave a golden trace as they run around. Users need to blow into the phone to puff the fog away. An uncovered sprite will freeze for a second, and the user has to get close enough and tap to capture it, or it will hide back in the fog. Sprites move randomly after idling for a while, so the user has to keep an eye on the projected floor. Players compete with each other through their scores.
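The blow-into-the-phone mechanic can be approximated by watching the microphone’s signal level. The post doesn’t describe the actual implementation, so this is a minimal sketch: the threshold, window size, and hold duration are assumptions, and the sample windows would come from something like a Web Audio `AnalyserNode` in a browser build.

```typescript
// Sketch of blow detection: a sustained burst of microphone noise above a
// threshold is treated as a "blow". Threshold and hold length are assumed
// values for illustration, not numbers from the project.

// Root-mean-square level of one window of audio samples (roughly 0..1).
function rms(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}

// Returns a detector that fires once the level has stayed above `threshold`
// for `holdWindows` consecutive analysis windows, which filters out short
// taps and speech plosives.
function makeBlowDetector(threshold = 0.2, holdWindows = 5) {
  let loudCount = 0;
  return (buf: Float32Array): boolean => {
    loudCount = rms(buf) > threshold ? loudCount + 1 : 0;
    return loudCount >= holdWindows;
  };
}
```

In a browser, the windows would be filled via `getUserMedia({ audio: true })` and `AnalyserNode.getFloatTimeDomainData`, with a successful detection triggering the fog-dispelling animation.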
I could say this testing session changed almost everything about my project, but I feel better working on the new version, even though it means I need to work extra hard to finish in time for the presentation. One thing I learned is that it is really helpful to test with people outside my own circle who have different backgrounds.
A portfolio is a personal thing. When you show your portfolio to others, it should speak just like you, not like anyone else. IMA students are highly diverse in their skill sets, which makes the design harder.
- R/GA: https://www.rga.com/work
R/GA is a full-service advertising company. Like us, they work in many different fields. Since they face many kinds of clients, their works page has comprehensive features to help viewers filter for their needs. Each project case study has a short write-up, a slideshow of images or videos, social sharing tools, and related works.
For my own needs, filtering is worth providing, but R/GA’s implementation is far more complicated than I need.
- Shirley Huang: http://shirleyhuang.me/
Shirley is a former IMA student whose portfolio website has been recommended by professors as a good example. As someone with a strong visual design background, she gives her website a strong personal style. The works are categorized by type and then further tagged. Since her works are usually interactive, hovering the mouse over one reveals preview footage.
The issue is that there is no way to prioritize or control what is shown to a specific group of viewers.
- As a user, I can see a few featured projects.
- As a user, I can see a list of projects in order of importance.
- As a user, I can see the title of the project and a big thumbnail.
- As a user, I can get a URL made specifically for me so the results are tailored to my needs. (academic, job search, client)
- As a user, I can see details of a project by clicking in: images, videos, descriptions, and even demos.
- As a user, I can go to related projects by clicking on a category or a specific tag.
- As a user, I can view the website with no issues on my phone.
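The “tailored URL” story above could be served by a query parameter (for example `?audience=job`) that selects which projects appear and in what order. The post doesn’t specify the data model, so the `Project` shape and audience names below are assumptions for illustration:

```typescript
// Sketch of audience-tailored project listing. A project carries a manual
// priority plus the audiences it should be shown to; the audience string
// would be read from the URL's query parameters in the React app.

interface Project {
  title: string;
  priority: number;      // lower number = more important, set by hand
  audiences: string[];   // e.g. ["academic", "job", "client"] (assumed names)
}

// Returns the projects visible to an audience, most important first.
// With no audience (a plain visit), every project is shown.
function projectsFor(projects: Project[], audience?: string): Project[] {
  return projects
    .filter(p => !audience || p.audiences.includes(audience))
    .sort((a, b) => a.priority - b.priority);
}
```

A React route such as `/work` could read the parameter with `URLSearchParams` and pass it to this function, so sharing a link like `/work?audience=client` is all it takes to tailor the page for one viewer.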
For the final project, I’m going to rebuild my personal portfolio website using React, as a comparison to my previous version, which was made in Vue. Front-end developers never seem to settle on an agreement about such topics. The only way to truly understand the design differences between frameworks is to actually use them, so I can pick the right tool for the right job.
Each IMA student is different, yet we share one core trait: being cross-disciplinary. If you Google portfolio designs, almost all of them focus on one area. We need a better UI that can filter works by specific area, show the wide range of areas we work in, and still let us manually select a few pieces for a certain employer by giving them a special hidden category or tag.
This is my contact book. It focuses on basic features, but it adds full-text search combined with filters.
Users don’t need to enter precise information, just whatever they would naturally use to describe someone; searching for that text will bring the contact up.
Other planned features: the user’s interactions with a contact will be recorded and used to generate the “recent” section, as well as to sort the entire list in that order. Users will also be able to view the list in the order the contacts were added. Users will be able to override the default random color, or upload an avatar for a contact, to help them memorize and quickly locate someone.
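The describe-someone-in-your-own-words search can be implemented by flattening every field of a contact into one searchable string and requiring each query word to appear somewhere in it. The `Contact` shape here is an assumption for illustration, not the app’s actual schema:

```typescript
// Sketch of the contact book's full-text search: imprecise queries like
// "barista red hair" match against everything known about a contact.

interface Contact {
  name: string;
  phone?: string;
  notes?: string;   // free-form description, e.g. "barista, red hair"
}

function searchContacts(contacts: Contact[], query: string): Contact[] {
  // Split the query into lowercase words; an empty query matches everyone.
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  return contacts.filter(c => {
    // Flatten all known fields into one lowercase haystack.
    const haystack = [c.name, c.phone, c.notes]
      .filter(Boolean)
      .join(" ")
      .toLowerCase();
    // Every query word must appear somewhere, in any order.
    return words.every(w => haystack.includes(w));
  });
}
```

Requiring all words (rather than any) keeps results narrow as the user types more detail, which matches the goal of quickly locating one person from a fuzzy memory.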
The milestone for these two weeks was to have the basic networking set up and tested with the projector. So far, the networking has been done and tested, but the capstone prototyping space wasn’t allocated in time for the installation. So instead of testing the prototype I made, I asked users to try a few mobile AR demos. I chose this because not many people have experience with mobile AR, and specifically with ARCore, since it’s only available on very few devices.
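The post doesn’t say which protocol the phone-to-projection-computer networking uses, so this is one plausible sketch: small JSON datagrams over UDP, with the projection computer listening via Node’s built-in `dgram` module. The message shape (a phone pose) is an assumption for illustration:

```typescript
import * as dgram from "node:dgram";

// Sketch of phone -> projection-computer networking. The phone would send
// one small datagram per frame; UDP is assumed here because stale poses are
// better dropped than queued. All field names are illustrative.

interface PoseMessage {
  type: "pose";
  x: number; y: number; z: number;  // phone position in the tracked space
  yaw: number;                      // heading in radians
}

const encode = (m: PoseMessage): Buffer => Buffer.from(JSON.stringify(m));
const decode = (b: Buffer): PoseMessage => JSON.parse(b.toString("utf8"));

// Projection-computer side: listen for poses and hand them to the renderer
// that draws the fog glow and sprite traces on the floor.
function listen(port: number, onPose: (m: PoseMessage) => void): dgram.Socket {
  const sock = dgram.createSocket("udp4");
  sock.on("message", buf => onPose(decode(buf)));
  sock.bind(port);
  return sock;
}
```

The phone side (likely Unity/ARCore in the real project) would serialize the same JSON shape and send it to the computer’s IP on the chosen port.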
The interactions of the AR platform are spatial movement and tapping on the screen. This is consistent with almost all ARCore apps, including my proposed project.
Since it’s new to most people, I asked them the questions below; here are some of the answers I found expected, interesting, or completely surprising.
- What do you think about it?
“It’s cool.” This is the most common answer I get.
In fact, the feedback treated it more as a trick than as something genuinely useful. Limited use cases and device availability are going to be issues, though my proposed networking feature sounded like fun to the testers.
- What do you think works well?
Technically, it works really well. Some of the testers had tried Project Tango, and the fact that mobile AR doesn’t need additional sensors amazed them. The spatial tracking is fairly quick and accurate, and the ability to pin something spatially to a textured surface is really cool.
- What doesn’t?
One might expect tracking to work as soon as the app opens, but it actually requires the user to move around a bit. When simply told to move, users usually just swing their arms, which is not enough for the tracking to recognize surfaces. This is expected: it is like exploring the space with only one eye, where only by moving can you tell the distance between objects. A good way to deal with this is to guide the user through some movements (by utilizing storytelling, for example) without explicitly telling them to move.
One of the demos is a Tilt-Brush-like app. One tester reported that it’s much harder to use, since the input is touching and drawing on a 2D screen while the drawn object actually lives in 3D space. The lack of stereo vision adds to the confusion. Users need visual clues to know what they are interacting with.
One tester had seen me attach objects to places like the whiteboard or the ceiling and wanted to try that too. The point cloud indicates that an area is being tracked, but the user can’t actually attach anything to it yet. I had to instruct him to move around a bit so the phone could understand the surface it was facing. This is something I need to be aware of when designing interactions that use this feature.
- How do you find the tracking?
So far so good. There are dropouts, but tracking recovers fairly quickly. Still, losing tracking is really annoying when it happens, and users have no clue how to resume it.