Cover Story, March 2002 issue
Virtual Dreams
Thursday, March 4, 2010
The initial vision of virtual reality (VR) was laid out by Ivan Sutherland in his classic 1965 lecture, “The Ultimate Display.” The concept then attracted a wave of hype in the 1990s: Hollywood films, articles, and other media projected the idea that in a few years we might all be living in our own virtual worlds, for better or worse. As is usually the case with infant technologies, realizing those early dreams and harnessing VR for real work has taken longer than the hype predicted. But finally it’s happening.

Indeed, simulating the astounding complexity of the real world is still a kind of “Holy Grail” for researchers working in computer graphics and remains a major area of computer science research. At the University of North Carolina, Chapel Hill, Professor Dinesh Manocha is part of one of the biggest academic groups dealing with virtual environments and interactive computer graphics. And though the days when you will feel what it’s like to stand on the top of Mount Everest just by putting on a VR helmet may be way off, Manocha reveals technology that is already having some impact now.

The computer gaming industry has been driving graphics innovation over the last decade, at all levels. There have also been major developments on the hardware front. Notably, graphics chips from innovative companies like nVidia (maker of the chip in Microsoft’s Xbox and the Apple iMac) and ATI have brought the everyday desktop PC graphics processing power previously available only in $10,000 Silicon Graphics workstations. Graphics processing power has been growing at what Manocha calls a “super-Moore’s Law” rate. Moreover, these cheap and powerful graphics processors can also be used directly for applications other than drawing pretty pictures, including simulation and scientific computation.
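The difference a faster doubling period makes can be illustrated with simple arithmetic. The doubling periods below are illustrative assumptions (18 months is the classic Moore's Law figure; the "super" pace is a hypothetical 9 months), not measurements from the article:

```python
# Hypothetical illustration of "super-Moore's Law" growth: performance
# that doubles faster than the classic 18-month Moore's Law pace.
# Doubling periods here are assumptions for illustration only.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Performance multiplier after `years` at a given doubling period."""
    return 2 ** (years / doubling_period_years)

cpu = growth_factor(6, 1.5)   # classic pace: double every 18 months
gpu = growth_factor(6, 0.75)  # assumed "super" pace: double every 9 months

print(f"Classic growth over 6 years: {cpu:.0f}x")   # 16x
print(f"'Super' growth over 6 years: {gpu:.0f}x")   # 256x
```

Even a modest shortening of the doubling period compounds into an enormous gap over a decade, which is why graphics hardware outpaced general-purpose CPUs so dramatically.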

“Thanks to all of the kids playing games in the world, hardware has become almost free,” Manocha says. And that has implications in terms of the importance of 3D and simulation for business. For example, all engineers at a company like Boeing can make use of complex graphics-based software for their designs. Design and 3D modeling software has made significant strides. But what’s the next step?

This, principally, is where Manocha’s “GAMMA” and “Walkthrough” groups at UNC fit in. It’s one thing to develop 3D imagery for entertainment that makes you feel like you’re actually sailing on the Titanic. It’s quite another to be able to accurately simulate the operation of Newport News Shipbuilding’s Double Eagle oil tanker down to the last nut and bolt in its five million parts.

Tankers, power plants, nuclear submarines and so on are all billion-dollar systems that take hundreds of man-years to design. Manocha explains that General Dynamics’ Electric Boat Division might spend $100 million to build a physical mockup of a $2-billion nuclear submarine to make sure that they designed it right.

“How can we come up with computer graphics-based technology where we can eliminate these physical mockups?” says Manocha. “How can we create a synthetic environment where the user feels like they’re in the real environment?”

It’s not enough to have a pretty picture; an engineer also needs to be able to interact with the virtual model. This is the ultimate in interactive computer graphics. For Manocha, interactive means updates more than 20 times per second — like computer games and digital television, where there is no perceived delay. But to achieve this for an engineering model of a tanker is extremely challenging.
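Manocha's 20-updates-per-second threshold translates directly into a per-frame time budget that every operation (culling, rendering, collision checks) must fit inside. A minimal sketch of that arithmetic:

```python
# Interactivity as a time budget: at Manocha's threshold of more than
# 20 updates per second, each frame must be produced in under 50 ms.

def frame_budget_ms(updates_per_second: float) -> float:
    """Maximum milliseconds available per frame at a given update rate."""
    return 1000.0 / updates_per_second

print(frame_budget_ms(20))  # 50.0 ms at the interactivity threshold
print(frame_budget_ms(60))  # ~16.7 ms at a typical game frame rate
```

For a model with millions of parts, fitting all processing into 50 milliseconds is what makes the problem so hard.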

“Models of subs and power plants are way too complex for all the computing power in the world today,” he explains. So Manocha, Frederick P. Brooks and their group have been working on fast algorithms that make interacting with such massive virtual models possible. He admits that it could be a few years before the ultimate vision becomes a reality.
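The article does not detail the group's algorithms, but one standard family of techniques for making massive models interactive is level-of-detail (LOD) rendering: objects far from the viewer are drawn with coarser geometry so the total triangle count fits the frame budget. A minimal sketch, with hypothetical distance thresholds:

```python
# Distance-based level-of-detail (LOD) selection, a common technique for
# interactive rendering of massive models. Thresholds are hypothetical.

def select_lod(distance: float) -> str:
    """Pick a mesh resolution based on viewer distance (meters, assumed)."""
    if distance < 10.0:
        return "full"      # nearby: show every nut and bolt
    elif distance < 100.0:
        return "medium"    # mid-range: simplified mesh
    else:
        return "coarse"    # far away: rough approximation suffices

for d in (5.0, 50.0, 500.0):
    print(d, select_lod(d))
```

Real systems combine LOD with visibility culling, so parts that are occluded or outside the view are never processed at all.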

Some components of this technology, including collision detection, have been made available on the group’s Web site. More than 6,000 users have downloaded the system, and more than 35 commercial licenses have been issued. Commercial vendors of virtual prototyping systems, such as MSC Working Knowledge, Holometric, Amrose, Mechanical Dynamics and SDRC, have licensed this technology and incorporated it into their packages. Kawasaki, one of the major developers of industrial robots, has licensed the group’s Proximity Query Systems. ADAC Laboratories, the world market-share leader in nuclear medicine and radiation-therapy planning systems, has also licensed the collision detection libraries and incorporated the technology into its next-generation gamma cameras for real-time motion control and collision avoidance.
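The libraries mentioned above operate on full triangle meshes; a common first phase in any collision-detection system is a cheap bounding-volume overlap test that quickly rules out pairs of objects that cannot possibly touch. A minimal sketch using axis-aligned bounding boxes (AABBs), not the group's actual code:

```python
# Broad-phase collision detection sketch: two axis-aligned bounding boxes
# overlap only if their intervals intersect on every axis.

from dataclasses import dataclass

@dataclass
class AABB:
    min_pt: tuple  # (x, y, z) lower corner
    max_pt: tuple  # (x, y, z) upper corner

def aabbs_overlap(a: AABB, b: AABB) -> bool:
    """True if the boxes intersect on all three axes."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

box1 = AABB((0, 0, 0), (1, 1, 1))
box2 = AABB((0.5, 0.5, 0.5), (2, 2, 2))
box3 = AABB((5, 5, 5), (6, 6, 6))
print(aabbs_overlap(box1, box2))  # True: the boxes interpenetrate
print(aabbs_overlap(box1, box3))  # False: the boxes are far apart
```

Only pairs that pass this cheap test move on to the expensive exact triangle-level intersection check, which is how systems with millions of parts stay interactive.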

Haptics

But making models of complex systems is not the only thing on the horizon for interactive computer graphics and virtual environments. Manocha points to a concept called “haptics,” which brings a sense of touch into the world of simulation. Some very primitive haptic devices are making their way onto the computer gaming scene — like a steering wheel controller that gives someone playing a car-racing game some physical sensation of the feedback the road might give — bumps, crashes, and so on.
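A haptic device renders touch as force. One of the simplest standard models, a sketch rather than any particular product's method, is penalty-based feedback: when the virtual tool penetrates a surface, push back with a spring force proportional to penetration depth (Hooke's law). The stiffness constant below is an assumed value:

```python
# Penalty-based haptic feedback sketch: push back against the user with a
# spring force proportional to how deep the tool sinks into the surface.

STIFFNESS = 500.0  # N/m, hypothetical spring constant for the surface

def penalty_force(penetration_m: float) -> float:
    """Restoring force in newtons; zero when the tool is not in contact."""
    return STIFFNESS * penetration_m if penetration_m > 0 else 0.0

print(penalty_force(0.002))  # 1.0 N for 2 mm of penetration
print(penalty_force(-0.01))  # 0.0 N: tool is above the surface
```

Haptic loops typically run at around 1,000 updates per second, far faster than the visual frame rate, because the sense of touch is much more sensitive to delay than the eye.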

At UNC, a haptic painting system developed by a team led by Ming C. Lin and Manocha allows a much more sensory form of computer painting. Instead of using a mouse, the painter uses a $10,000 virtual stylus called the Phantom. Designed by SensAble Technologies, it gives the kind of sensory feedback a real brush might give. The software translates those nuances of touch onto the screen, and the results are astounding (see paintings above, all drawn by amateur painters). The Phantom, given its heavy price tag, is obviously not making its way into every living room in America, but devices like it are clearly an indication of what’s to come.

In the medical world, according to Manocha, haptics has garnered a lot of attention for applications such as minimally invasive robotic surgery. A doctor learning to carry out an endoscopic procedure, for example, could practice on a haptic simulation device rather than on a live patient. This is closer to the kind of virtual reality that has seen so much hype.

The Future
Manocha points to four major areas that need to be further developed to make the ultimate virtual environments a reality. The first is simply faster interactivity; the second is better interaction and haptics technologies; the third is the legendary task of all computer graphics — making it look like real life; and the fourth is dealing with real-world complexity.

Manocha and his group continue their quest, with a focus less on making humans feel like they’re living in a virtual world, and more on allowing engineers to understand systems before they are built. Others are also tackling similar problems, including major groups within large corporations, like Boeing, Microsoft and leading automobile manufacturers like BMW and Ford.

The future could promise a level of computing and graphics advancement that would essentially allow somebody to test drive the latest car model before it has even hit the production line. Just don’t count on it any time soon.
