When humans first started to roam the earth, I was obviously not there to observe how they communicated, but I am certain they did not use a keyboard. Early humans almost certainly communicated with sounds, gestures, and facial expressions, much as we still do in those rare moments when we actually communicate face to face.
We now find ourselves sitting at a desk with our legs bent and our hamstrings tightening. We type on a keyboard with our fingers curled and our carpal tunnels collapsing under the weight of our arms. We make little sound and few meaningful facial expressions while we interact with our computers, other than the occasional gasp or cuss. What we do today is a far, far cry from how our cave-dwelling ancestors first communicated. There was no typing for the first 43 thousand years we roamed the earth, and the QWERTY keyboard has only been around for about 150 years. So, conservatively, we have enslaved ourselves to a desk and keyboard for less than 0.4 percent of our existence on the earth.
Will our bodies evolve to better suit the technology we have developed? That evolution will certainly not occur in my lifetime. Maybe it is time instead to completely rethink the user interface and how we interact with our computers.
Let me introduce the future user interface, or FUI (pronounced "fouy"), for all of our acronym lovers out there. The FUI is a complete workstation, seat included. You sit down in a chair that offers lumbar support and extends your legs so that your hamstrings are not constantly bent and tightening; it is more of a lounge chair than an office chair. There is no desk, but a large combined monitor/CPU with a touch screen is mounted to the chair. The mount is, of course, adjustable and movable to suit the individual's body dynamics (short arms/long arms, tall/short, nearsighted/farsighted). Our hands and wrists rest on our laps or wave around in space; rarely are they leaning on a desk. The FUI screen is at least 32 inches by 32 inches and sits within arm's reach. Inside one armrest of the FUI chair is a wireless keyboard with a touch pad. In the other armrest is a fold-out work surface that can serve as either a touch interface or a flat writing surface. The monitor has two of the quadraphonic speakers built in, the chair holds the other two, and a subwoofer sits under the seat. An HD webcam and quadraphonic microphones are mounted at the top of the monitor/CPU and in the chair's headrest. A Microsoft Kinect-like bar mounted at the bottom of the monitor skeletally recognizes a person from the waist up, including all the joints of the hand. The monitor/CPU has USB ports, a DVD slot, and audio/video input/output ports.
When you sit down at the FUI, you log in by authenticating with two of five possible factors. The choices are a pre-established voice phrase, facial recognition, a pre-established gesture (such as a hand gesture), a pre-established touch-screen pattern, and a password typed on either a touch keyboard on the touch screen or the wireless keyboard. You pick any two of the above, and the system logs you in and authenticates you. I would usually authenticate just by sitting down, getting a facial recognition lock, and then saying something like, "Hello, it's me." Regardless of how you authenticate, the system captures a few seconds of audio and video for every authentication attempt, and a few more seconds after a successful authentication. Say I wanted to do some computing in the dark for some reason, maybe to watch a movie; in that case I might authenticate with my voice and by touching some pre-defined spots on the touch screen. Now say I have a bad case of laryngitis and a bee stung me in the face, so I really don't look like myself (talk about a bad day); then I might authenticate with a gesture and the touch screen. So why have a password at all? Because people are creatures of habit, and the keyboard allows for the habitual fallback.
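For readers who like to see an idea in code, the two-of-five login policy above can be sketched in a few lines. This is a minimal illustration only; the factor names and the idea that each factor reports a simple pass/fail are my own assumptions, not a real FUI API.

```python
# Hypothetical sketch of the FUI's two-of-five login policy.
# Factor names are illustrative assumptions, not a real API.

KNOWN_FACTORS = {
    "voice_phrase",        # pre-established spoken phrase
    "facial_recognition",  # webcam face match
    "gesture",             # pre-established hand gesture
    "touch_pattern",       # pre-defined spots on the touch screen
    "password",            # typed on touch keyboard or wireless keyboard
}

def authenticate(verified_factors):
    """Grant access when at least two distinct known factors verify."""
    matched = set(verified_factors) & KNOWN_FACTORS
    return len(matched) >= 2

# Sitting down in the dark: voice plus touch pattern still gets you in.
print(authenticate({"voice_phrase", "touch_pattern"}))  # True
# A single factor alone is never enough.
print(authenticate({"password"}))                       # False
```

The point of the sketch is that no single factor is privileged: any pair works, which is what lets you route around laryngitis, bee stings, or a dark room.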
So once you are logged into the FUI computer, how are you going to interact with it? Life with the FUI is full of options. You can stay with the keyboard and touch screen and communicate very unlike a cave person, or you can return to your roots and communicate with voice, touch, gesture, and sign language. Voice would be the way to create the written word. Gesture would be used to navigate content. Sign language could be used in place of voice when you want to create content quietly. You don't know sign language, you say? You did not come into this world knowing how to type on a keyboard either. Or perhaps lip reading could be translated into text.
For more creative, graphical content creation, navigation can be conducted with gestures and content manipulated with touch and gesture, while the keyboard and touch pad remain an option. The idea of pages could still exist with the FUI, but navigating and zooming in and out of a single page could be greatly extended, similar to how Prezi.com, the zooming presentation editor, works.
The possibilities for the FUI are endless. Isn't it time to rise from our desks and keyboards and return to communicating the way we have for thousands of years, through talking and gestures? FUI for you and to you. Your thoughts and suggestions are welcome. Criticism should be directed to someone else (I really don't care who, but someone else, please).
Bye for now,