As I understand it, “ZeroUI” is a concept that allows the computer to interact with the human body in a multi-dimensional way. That is to say, any information conveyed by our body (including sound, movement, vision and so on) can be accepted, processed and responded to by the computer. To realize this, I think at least an information receiver, a recognition system and a response mechanism are necessities for a ZeroUI computer.
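The three components above could be sketched as a simple pipeline. This is only an illustrative toy, assuming text-like signals and made-up intent names; no real device works this simply.

```python
def receive_signal(raw_input: str) -> str:
    """Receiver: accept a raw bodily signal (simplified here to text)."""
    return raw_input.strip().lower()

def recognize(signal: str) -> str:
    """Recognition system: map a cleaned signal to an intent.
    The signal names and intents are hypothetical examples."""
    intents = {"cheek_twitch": "select", "blink": "next", "nod": "confirm"}
    return intents.get(signal, "unknown")

def respond(intent: str) -> str:
    """Response mechanism: produce the computer's reaction."""
    return f"Computer action: {intent}"

print(respond(recognize(receive_signal("  Cheek_Twitch "))))
# -> Computer action: select
```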
This concept is helpful when the interaction needs to involve more than hands on a touchscreen. In fact, the first thing that comes to my mind is the device that helped Stephen Hawking communicate with the outside world. According to the SwiftKey team (one of the producers of that device), Professor Hawking was “using a small sensor which is activated by a muscle in his cheek”. Detailed information can be found here:
Clearly, this device includes a sensor, as the SwiftKey team has explained. It also relies on a strong database that records Professor Hawking’s word-usage habits, so that it can predict what he is going to say next.
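The idea of predicting the next word from recorded usage habits can be illustrated with a minimal bigram model: count which word most often follows each word in past text, then suggest that word. This is only a sketch of the general technique; SwiftKey’s actual model is far more sophisticated and is not described in the source.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Record usage habits: count which word follows which."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model: dict, word: str):
    """Predict the most frequent follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train("the universe is vast and the universe is expanding")
print(predict_next(model, "universe"))  # -> is
```

Even this toy shows why a personal database matters: the more of one person’s text it has seen, the better its guesses fit that person’s habits.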
The SwiftKey team specialized the device for Professor Hawking, and its success demonstrates the great medical value of similar innovations. Many people are unable to communicate with others conveniently due to physical disability, but with a “ZeroUI” device they could express themselves more efficiently.
Although this device is a great innovation, I believe there is still much room for improvement. After all, Professor Hawking still used the sensor to “type” out what he wanted to say; would it be possible to enable other forms of expression? I believe expanding this helpful innovation into the art field would be a good direction.