The Future User Interface

When humans first started to roam the earth, although I did not directly observe how they communicated, I am certain they did not use a keyboard. The early human most certainly communicated with sounds, gestures and facial expressions, much as we communicate now in those rare moments when we are actually communicating face to face.

We now find ourselves sitting at a desk with our legs bent and our hamstrings tightening. We type on a keyboard with our fingers bent and our carpal tunnels collapsing under the weight of our arms. We make little sound or meaningful facial expression while we interact with our computer, other than the occasional gasp or cuss. What we do today is a far, far cry from how our cave-people ancestors first communicated. There was no typing for the first 43 thousand years we roamed the earth. The QWERTY keyboard has only been around for about 150 years. So, conservatively, we have enslaved ourselves to a desk and keyboard for less than 0.4 percent of our existence on the earth.

Will our bodies evolve to be better suited to the technology we work with today? That evolution will certainly not occur in my lifetime. Maybe it's time to completely rethink the user interface and how we interact with our computer instead.

Let me introduce the future user interface, or FUI (pronounced fouy), for all of our acronym lovers out there. The FUI is a complete workstation, including the seat. With the FUI, you sit down in a chair that offers lumbar support and extends your legs so that your hamstrings are not constantly bent and tightening. It is more of a lounge chair than an office chair. There is no desk, but a large monitor/CPU mounted to the chair that is also a touch screen. The mount is of course adjustable and movable to suit the individual's body dynamics (short arms/long arms, tall/short, nearsighted/farsighted). Our hands and wrists rest on our lap or wave around in space; rarely are they leaning on a desk. The FUI computer is at least 32 inches by 32 inches and is within arm's reach.

Inside one armrest of the FUI chair is a wireless keyboard with touch pad. In the other armrest is a fold-out work surface/desktop that can work as a touch-surface interface or a flat writing surface. The monitor has two quadraphonic speakers built in, while the FUI chair has the other two, with a subwoofer under the seat. There is an HD webcam and quadraphonic microphones mounted at the top of the monitor/CPU and in the chair's headrest. A Microsoft Kinect-like bar is mounted at the bottom of the monitor that skeletally recognizes a person from the waist up, including all the joints of the hand. The monitor/CPU has USB ports, a DVD slot and audio/video input/output ports.

When you sit down at the FUI, you log in by authenticating through two of five possible factors. The choices include a pre-established voice phrase, facial recognition, a pre-established gesture (it could be a hand gesture), a pre-established touch-screen pattern, and/or a password typed in either from a touch keyboard on the touch-screen monitor/work surface or from the wireless keyboard. You pick two of the above and the system authenticates you and logs you in. I would usually authenticate by just sitting down, getting a facial-recognition lock, then saying something like, "Hello, it's me." Regardless of how you authenticate, the system will capture a few seconds of audio and video for every authentication attempt, and also a few seconds of video and audio after a successful authentication. Let's say I wanted to do some computing in the dark for some reason, maybe to watch a movie; in that case I might authenticate with my voice and by touching some pre-defined spots on the touch screen. Now let's say I have a bad case of laryngitis and a bee stung me in the face, so I really don't look like myself (talk about a bad day); in that case I might authenticate with a gesture and the touch screen. So why have a password at all? Because people are creatures of habit, and the keyboard allows for a habitual fallback.
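The "pick any two of five" scheme is simple enough to pin down in code. Here is a minimal sketch in Python; the factor names are my own illustrative choices, and the individual checks (camera, microphone, gesture sensor, and so on) are assumed to exist elsewhere and simply report pass/fail:

```python
# Sketch of the FUI's "any two of five" login. Factor names and the idea
# that each check reports a simple pass/fail are illustrative assumptions.

KNOWN_FACTORS = {"voice_phrase", "face", "gesture", "touch_pattern", "password"}

def authenticate(results: dict, required: int = 2) -> bool:
    """results maps a factor name to True/False: did that check pass?

    Login succeeds when at least `required` distinct known factors passed.
    """
    passed = {name for name, ok in results.items()
              if name in KNOWN_FACTORS and ok}
    return len(passed) >= required

# Computing in the dark: voice plus a touch pattern, no camera needed.
assert authenticate({"voice_phrase": True, "touch_pattern": True})
# Laryngitis and a bee sting: gesture plus touch pattern still get you in.
assert authenticate({"gesture": True, "touch_pattern": True, "face": False})
# One factor alone is never enough.
assert not authenticate({"face": True})
```

The point of requiring any two is exactly the flexibility described above: no single sensor (camera, microphone, touch screen) is ever mandatory, so a dark room or a swollen face never locks you out.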

So once you are logged into the FUI computer, how are you going to interact with it? Life with the FUI is full of options. You can stay with the keyboard and touch screen and communicate very unlike a cave person. Or you can return to your roots and communicate with voice, touch, gesture and sign language. Voice will be the way to create the written word. Gesture will be used to navigate content. Sign language could be used in place of voice when you want to create content quietly. You don't know sign language, you say? You did not come into this world knowing how to type on a keyboard either. Or perhaps lip reading could be translated into text.
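The division of labor among modalities can be captured as a small routing table. This is only a sketch, and the modality and action names are my own, not part of any real FUI API:

```python
# Hypothetical routing table mirroring the roles assigned above: voice,
# sign language and lip reading create text, gestures navigate, touch
# manipulates, and the keyboard remains the habitual fallback.

ROLES = {
    "voice": "create_text",
    "sign": "create_text",        # quiet alternative to voice
    "lipreading": "create_text",
    "gesture": "navigate",
    "touch": "manipulate",
    "keyboard": "create_text",    # the habitual fallback
}

def handle(modality: str, payload: str) -> str:
    """Map an input event to the action the FUI would take."""
    action = ROLES.get(modality)
    if action is None:
        return "ignored"
    return f"{action}({payload!r})"
```

For example, `handle("gesture", "swipe_left")` yields a navigation action, while an unrecognized modality is simply ignored.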

For more creative, graphical content creation and interaction, navigation can be conducted with gestures and content manipulated with touch and gesture, while the keyboard and touch pad are always an option. The idea of pages could still exist with the FUI, but navigating and zooming in and out of one page could be much more fluid, similar to how a zooming presentation editor works.

The possibilities for the FUI are endless. Isn't it time to rise from our desks and our keyboards and return to communicating the way we have for thousands of years, through talking and gestures? FUI for you and to you. Your thoughts and suggestions are welcome. Criticism should be directed to someone else (I really don't care who, but someone else please).

Bye for now,


This entry was posted in Predictions.

2 Responses to The Future User Interface

  1. Zack says:

    I don’t think that voice-activated technology is ever going to replace the written word – it’s evident that people simply prefer written communication, on both ends. The explosion in SMS/texting usage that occurred well after cellphones were widely available is evidence of that. I’m hoping that ASETNIOP (a typing method based on ten input points; i.e. ten fingertips) will catch on, which is a lot more like the gesture-based system you’re describing.

  2. Jeremy says:

    Hi there, I have a few comments and observations. It seems that you describe mostly the chair and desktop setup, which is not generally considered "the user interface". Though it sounds cool, it also sounds very expensive.
    Most of what you describe as hardware components and login methods already exists. You may want to go and find out what exists to make it "real", or maybe build the machine! You can make a large screen with a projector ($300), add a Kinect to detect the touch gestures ($100), buy a wireless keyboard ($30) and a tactile mouse ($50), and add your 5.1 sound ($50).

    The whole interaction part you describe is still science fiction, though. The detection of sign language or lip reading will not be available for many years (10 to 15 at minimum), if it ever comes. I also have strong objections about your lack of love for keyboard input. I understand you type with a QWERTY keyboard; that may be why you don't like it… Most people just pick up the keyboard and never learn to type. Typing can be a fulfilling experience if it is as quick as your thoughts (or faster) and comfortable to use. Just have a look at

    You will understand that, if you want to create and design new ways to interact with computers, you have to rethink everything. Typing is hard to learn, and I still don't understand why there are not more one-handed keyboards, especially wireless ones.

    What you describe is a really "near" future. I personally dream of and work on getting rid of the computer by having complete computer programs and operating systems in spatial augmented reality. It already works; it just needs a few decades to get out of the lab.
    (Like OmniTouch: )
    (Or like this for drawing: )

    That said, remember that if we still use mice and keyboards, it is because they are the best for their uses (typing, and fast and accurate pointing).
