The Other Side of the Interaction Design Story
For a computer system, the user interface is a critical component that determines whether the system is effective or obsolete. Equally important, but sometimes neglected, are the interaction devices that actually enable users to interact with the system.
Often, user interface design is defined, and limited, by the interaction devices available. We believe that, in general, as the flexibility of the interaction device increases, the complexity and clutter of the corresponding user interface decrease.
In this post, we discuss both input and output devices in their many forms, how they have influenced user interface design, and what the future may bring.
One of the two primary input devices of the computer, the keyboard traces its lineage back to the typewriter, invented in the 1860s. It comes in many variants of different sizes and shapes, but they all serve the same purpose: letting users type words. We will go out on a limb here and say that the keyboard actually has a flawed interaction design.
Try to recall when you first saw a keyboard. Did you know what ‘F1’–‘F12’, ‘Ctrl’, ‘Alt’ and pretty much half the other keys represented? Most of these keys are defined purely by convention, and we believe that is one of the hindrances for the IT-illiterate folks.
As for the alphabet keys, they were actually designed to hinder typing speed. The QWERTY layout is inherited from the keyboard’s typewriter ancestors, which stamped letters on paper one by one. A typewriter would jam and smudge ink if the user typed too fast, a limitation that keyboards have long since evolved out of. However, due to widespread usage and training, the number of QWERTY users has reached a critical mass where it is extremely difficult to adopt a newer and more efficient keyboard layout.
Once upon a time, ‘quick typing’ was slow
Despite its flaws, the keyboard does serve its purpose very well, which is validated by its continued existence and popularity, even ‘ascending’ to touch-screen devices.
The layout of the keyboard has, in turn, shaped interfaces designed to use the keys efficiently. One such example is the interface of first-person shooters (FPS). In most FPS games that use the keyboard as the primary input device, if not all of them, the first thing that comes to a gamer’s mind is the W-A-S-D layout for moving around in the game. The keyboard’s design has had such a heavy influence on the gaming industry that this layout has become common practice.
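To make the idea concrete, here is a minimal sketch of how a game might turn held W-A-S-D keys into a movement direction. All names here are illustrative, not from any particular engine; the normalization step shows a common detail of such schemes, namely that holding two keys (a diagonal) should not move the player faster than holding one.

```python
import math

# Illustrative key-to-direction table: W/A/S/D sit together under the
# left hand, one key per axis direction.
KEY_VECTORS = {
    "w": (0.0, 1.0),   # forward
    "s": (0.0, -1.0),  # backward
    "a": (-1.0, 0.0),  # strafe left
    "d": (1.0, 0.0),   # strafe right
}

def movement_vector(pressed_keys):
    """Sum the directions of all held keys, then normalize so that a
    diagonal (e.g. W+D held together) is no faster than a straight move."""
    x = sum(KEY_VECTORS[k][0] for k in pressed_keys if k in KEY_VECTORS)
    y = sum(KEY_VECTORS[k][1] for k in pressed_keys if k in KEY_VECTORS)
    length = math.hypot(x, y)
    if length == 0:
        return (0.0, 0.0)   # no keys held, or opposing keys cancel out
    return (x / length, y / length)
```

Holding W alone yields a pure forward vector, while W and D together yield a unit-length diagonal.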
Shortcut keys have likewise been chosen to make users’ daily tasks on the computer convenient: Ctrl+C to copy, Ctrl+V to paste, and so on, with the letter keys sitting conveniently close to Ctrl. From this example, we can clearly see how the layout of the keyboard has affected the way developers assign shortcut keys. To conclude on the keyboard: its design, and especially its layout, has significantly influenced many generations of interface design.
The other primary input device for the computer, and a vital cornerstone of the WIMP interface style, the mouse was developed in the 1960s. In contrast to the keyboard, we consider the mouse an example of good interaction design.
The concept of the mouse is simple and effective: move the mouse to move the cursor. Even children understand how to use a mouse after watching the white arrow glide around along with it.
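That mapping can be sketched in a few lines. A mouse reports relative motion (dx, dy); the system accumulates those deltas into an absolute cursor position and clamps it to the screen edges. The screen size, sensitivity parameter, and function name below are all illustrative assumptions, not any real API.

```python
# Assumed screen dimensions for this sketch.
SCREEN_W, SCREEN_H = 1920, 1080

def move_cursor(pos, dx, dy, sensitivity=1.0):
    """Apply one relative mouse report (dx, dy) to cursor position `pos`,
    scaling by a sensitivity factor and clamping to the screen bounds."""
    x = min(max(pos[0] + dx * sensitivity, 0), SCREEN_W - 1)
    y = min(max(pos[1] + dy * sensitivity, 0), SCREEN_H - 1)
    return (x, y)
```

Because the device only ever reports deltas, the cursor simply stops at the edge of the screen rather than leaving it, which is exactly the behaviour users expect.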
The design of the mouse has seen little change since its invention, because it serves its function very effectively. The addition of the scroll wheel has more or less completed the design; it is so effective that there has been little or no need for further change.
Once again, the mouse has affected interface design to such an extent that simple tasks can be done with just a few clicks. Instead of requiring users to type long commands, the mouse lets developers simply place a button on the interface, making the command as easy as click-and-go.
Improvements to the mouse, especially in the gaming industry, have allowed for even more streamlined interface designs. This is achieved by adding extra buttons to the mouse, which can be configured and customized to suit users’ individual needs.
Of course, having more buttons does not always mean a better mouse. What matters is the practicality of the mouse itself, and whether an additional button really serves any purpose at all. This is well illustrated by the two pictures below: the Razer Ouroboros versus the product that Logitech would rather not mention.
What have you done?
The advantages of additional buttons
The most unintuitive part of the mouse as an interaction device, for some people (myself included), is the name itself.
The mouse is so important as a computer interaction device that it’s hard to imagine what computing would be like without it. A lot of human-computer interaction would not have been possible without its invention.
To end off our overview of the mouse, we would like to express our appreciation for the efforts of Douglas Engelbart, who invented such a revolutionary device for the computer. Sadly, he never received any royalties for his invention.
Douglas Engelbart – Inventor of the mouse
The current hype in computer interaction devices is the touch-screen. Fun fact: touch-screen technology has been around for quite some time. In fact, the touch-screen was first pioneered in the 1970s, making it just about ten years younger than the mouse. So, the question is: why now?
In the past, the touch-screen was considered impractical, as it was much more expensive to build a touch-screen desktop. Moreover, a touch-screen interface back then did not significantly improve the user experience over a keyboard and mouse.
The introduction of the iPod touch into the market can be considered a major milestone for touch-screen devices. It opened users up to a whole new perspective on how the touch-screen can be used to maximize user experience. Games have been made more interactive, with users directly controlling in-game characters with their hands, and developers have designed their user interfaces to fully utilize the capabilities of the touch-screen.
The touch-screen has also made it possible for users to input characters by simply “writing” with their fingers on the device, opening up a whole new horizon of possibilities for future touch-screen products.
The sheer number of competitors in the touch-screen industry (Apple, Samsung, Microsoft, etc.) has allowed touch-screen technology to improve at a much faster rate than it did from the 70s to the 90s. This has driven down manufacturing costs, and as a result we are seeing more and more devices with touch-screens.
The touch-screen has such a significant influence on human-computer interaction that we sometimes find ourselves touching the screens of various devices, only to be surprised that they are not touch-screens. It is not hard to imagine a future where the touch-screen totally takes over the roles of the keyboard and the mouse.
The upcoming Windows 8 desktop prototypes, which demonstrate how the touch-screen can replace the functions of the keyboard and mouse, show how far the effectiveness of the touch-screen has come since its early days. As companies continue to develop touch-screen devices, the way interfaces are designed will certainly be affected. After all, interfaces are greatly shaped by the nature of the input device.
The future of the touch-screen? (from Minority Report)
Motion-sensing input devices
As we look further into the future of interactive input devices, the general trend the industry is heading towards is contact-less devices. From the Xbox Kinect to the PlayStation Move, not forgetting the Wii’s WiiMote, there is a wide variety of ways to implement a motion-sensing input device.
The invention of motion-sensing devices has allowed unique interfaces to be developed, where users’ actions directly affect the interface. A prime example is Wii Sports, where user actions are sensed and reconstructed in the game world. The motion is captured by the WiiMote, which contains a gyroscope and an accelerometer to track user movement. This allows newer interfaces to be designed to feel more life-like.
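As a rough illustration of what game code might do with such sensor data, here is a toy gesture detector: it flags a “swing” when the magnitude of any accelerometer sample clearly exceeds the resting reading of gravity. The threshold, units, and function name are our own illustrative assumptions, not how Wii Sports actually works.

```python
import math

GRAVITY = 9.81                    # m/s^2: magnitude read when at rest
SWING_THRESHOLD = 2.0 * GRAVITY   # illustrative cutoff for a sharp swing

def detect_swing(samples):
    """Given (ax, ay, az) accelerometer samples, return True if any
    sample's magnitude exceeds the threshold, i.e. the controller was
    swung sharply rather than held still."""
    return any(
        math.sqrt(ax * ax + ay * ay + az * az) > SWING_THRESHOLD
        for (ax, ay, az) in samples
    )
```

A controller lying on a table only ever reports roughly 1 g, so it never triggers; a tennis-style swing spikes well above the threshold for a few samples.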
The Xbox Kinect, on the other hand, reads users’ movements directly off a camera and uses the resulting data to control the movements of in-game characters. Compared to the WiiMote, this allows for more degrees of freedom, and is thus even more faithful to movement in real life.
The PlayStation Move is more of a hybrid of the WiiMote and the Xbox Kinect. By using a camera to track the sphere at the tip of a wand-like controller, user actions can directly drive on-screen movement.
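The core of that camera-tracking idea can be sketched simply: find every pixel matching the sphere’s colour and take the centroid as the controller’s on-screen position. This is a minimal toy version under our own assumptions (a 2D pixel grid and a colour-matching predicate), not the actual PlayStation Move pipeline, which also estimates depth from the sphere’s apparent size.

```python
def track_sphere(frame, is_sphere_color):
    """frame: 2D grid (list of rows) of pixel values.
    is_sphere_color: predicate deciding if a pixel matches the sphere.
    Returns the (x, y) centroid of matching pixels, or None if the
    sphere is not visible in the frame."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if is_sphere_color(px):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Frame after frame, the centroid traces out the wand’s path, which the console can then map onto cursor or character movement.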
Motion sensing opens up another horizon of possible interfaces, slightly different from what touch-screen devices offer. It will be interesting to see which technology becomes dominant, or perhaps a hybrid of both. Whichever turns out to be dominant, it will greatly affect the way interfaces are designed in the future.
Monitors have been around since the 1980s, when they were mainly CRT monitors. Over the years, newer technologies have been introduced to enhance the visuals that can be presented to users.
From CRT to liquid-crystal to OLED panels, and from standard definition to HD and then 3D, the monitor has developed at a very fast rate over the past few years.
Once again, changes in the monitor affect the way designers design their interfaces. Better graphics and a larger range of colours let designers put even more onto their interfaces. HD allows designers to work in very fine detail, while 3D adds a perception of depth to interface design.
As an output device, the monitor captures the user’s visual attention, which carries one of the major interactions between the user and the computer. Therefore, any improvement in monitor technology will certainly affect the way designers go about designing their interfaces.
Speakers, on the other hand, address another of the five human senses: hearing. Compared to the monitor, there have not been many changes or improvements to the way speakers are made. This might be partly because the human ear can only perceive a limited range of frequencies, which means that many improvements to the speaker would make little audible difference to the user.
Then again, interfaces depend heavily on the sound that speakers can output. Having a speaker means the computer can send audio notifications to the user, something widely used across applications and interface designs.
What the future may bring
When we consider the general direction in which interaction devices have evolved, an interesting and somewhat ironic trend is that they are evolving themselves out of existence. The mouse took over some of the keyboard’s functionality by allowing WIMP interfaces to replace command-line interfaces. The touch-screen then did away with the separate pointing device altogether. Perhaps, as we progress further, interaction devices will shrink into non-existence and we will end up using the good ol’ hands and feet again. Or maybe even less. (see video)
Imagine playing games with that.