I see that Microsoft has come out with a coffee-table-shaped computer that represents a major step forward in user interface (UI) design: it accepts multiple simultaneous inputs directly on the display, from more than one user at once, and even from objects placed on or near the screen. Not your average touchscreen. Popular Mechanics test-drives it (with video) in a fairly extensive report that includes an overview of how it actually works. As new and groundbreaking as this is, I immediately knew I’d seen it somewhere before. Anyone who regularly follows the TED Talks may recall it as well: Jeff Han demonstrated the UI (link to video posted August 2006) in February of last year in an inspiring talk that showed off the technology. Although the technology simply “looks cool” (particularly as Han demonstrated it), the breakthrough is more significant than may be immediately apparent. The current price tag of $5,000–$10,000 (USD) seems prohibitive, but recall the price of the first home computers in the 1980s and translate it into the dollars of 25 years later; even for home users, it’s probably more appealing than it sounds. Of course, the price will plummet as adoption increases.
Calling it a “coffee table” immediately makes one think of the home market, where I can imagine a far more satisfying interactive chess game than previous computer games have allowed. Not worth the money for most people, but that is only a small example. More significantly, I’m thinking of the possibilities for government or business applications. Don’t think of it as a coffee table. Imagine:
- a military strategy table showing a detailed map of a region, with model soldiers, tanks, and other “props” placed in the proper places over the map. As changes are made by moving soldiers or drawing an “X” over a bridge with a finger, the battlefield scenario is automatically remapped and fed back to the tabletop display. “What-ifs” go from an hour’s speculation and calculation to a minute’s calculated prediction.
- a map table in a large city’s civic planning department. As sprawl is predicted, traffic gridlock is modeled onscreen, showing bottlenecks. Model bridges are placed on the map, and overpasses are placed over traffic lights. A button is tapped and a finger draws a new route or traces an old one to be upgraded. Traffic patterns are re-plotted and displayed, with routes changing colour to denote traffic flow by time of day.
- a construction firm wishing to bid on road and bridge construction in a mountainous region. Inputting detailed GIS data and satellite imagery for display, the firm is able to model the road and bridge construction very quickly, make adjustments, and generate materials estimates, costs, and timelines. Having expedited the process this far and removed a significant number of contingencies, they are able to bid quickly and undercut their competition with greater certainty in the final pricing.
This type of UI could be particularly well suited to running complex “what-if” scenarios, as well as to rapidly changing environments with multiple data inputs from automated and human sources. I’m thinking of air traffic control, disaster planning, electric power grid management, and many types of training simulations. As the technology moves toward home or recreational use, new strategy or virtual reality games may be developed to let multiple players act in real time rather than taking turns, with the entire field of play visible at once.
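To make the “multiple players acting in real time” idea concrete, here is a minimal sketch in Python of what distinguishes this kind of interface from a mouse-driven one: every finger or tagged object gets its own contact ID, and all events in a frame are applied together, so no one input pre-empts another. The event names and the `MultiTouchBoard` class are my own illustrative assumptions, not Microsoft's or Han's actual API.

```python
from dataclasses import dataclass

# Illustrative sketch only: this is not the real Surface SDK.

@dataclass
class TouchEvent:
    contact_id: int   # one ID per finger or tagged object
    x: float
    y: float
    kind: str         # "down", "move", or "up"

class MultiTouchBoard:
    """Tracks every active contact at once, instead of the
    one-cursor, turn-taking model of a mouse-driven UI."""

    def __init__(self):
        self.active = {}  # contact_id -> (x, y)

    def dispatch(self, events):
        # All events in one frame are applied together; no contact
        # blocks another, which is the key difference in this UI.
        for ev in events:
            if ev.kind in ("down", "move"):
                self.active[ev.contact_id] = (ev.x, ev.y)
            elif ev.kind == "up":
                self.active.pop(ev.contact_id, None)

# Two players touch the board in the same frame:
board = MultiTouchBoard()
board.dispatch([TouchEvent(1, 0.2, 0.3, "down"),
                TouchEvent(2, 0.8, 0.7, "down")])
print(len(board.active))  # both contacts are tracked simultaneously
```

A real implementation would also recognize object shapes and gestures, but the per-contact bookkeeping above is the essential departure from single-pointer input.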
The technology exists today for many similar kinds of applications, but to my mind, it hasn’t yet been pressed as far as it can, and will, be through the use of this type of interface.