This article is third in a series of articles on technology trends and their applicability to learning.
Think you can predict the future of learning technology? If you point to simulations and virtual reality, you’re wrong. Or at least half wrong.
Simulations will have their place. They’re becoming more and more sophisticated, and the popularity of such virtual universes as Second Life and There proves that people can be engaged by, even addicted to, experiences that are purely virtual.
But the real future of learning is what one developer calls “blended learning on steroids.” Just as training designers now choose among classroom-based training, synchronous online seminars, asynchronous Web-based training, and other options when determining the best delivery mechanism for learning content, in the near future those designing training will choose among training delivered in the real environment, via augmented reality, or in a virtual environment.
(These designations, along with a fourth—augmented virtuality—were laid out on a continuum in 1994 by Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino in their paper “Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum.”)
The real environment is self-explanatory. Virtual reality, in the form of computer-based simulations, has been written about quite frequently in learning publications—but much less attention has been paid to the learning potential of augmented reality. This is no doubt because its more complicated technology has matured at a much slower rate. But research and hardware advances have helped propel AR forward, and it is poised to make a very big splash in the learning arena. It is AR, then, that we will examine here in more depth.
A definition and brief history
Augmented reality combines features of a virtual environment with the real world. Most often, the augmentation is visual, with a user sporting an eyepiece connected to a wearable computer and positioning equipment. By tracking the position of the user’s head and where he is looking, the computer can overlay graphics and text onto his view of the world.
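To make the idea concrete, the overlay step can be illustrated with a toy calculation. This is a hypothetical sketch, not any vendor’s actual implementation: given the wearer’s head position and orientation, a labeled point in the world is transformed into the head’s frame of reference and projected onto the eyepiece with a simple pinhole-camera model. All names and numbers (the `world_to_screen` function, the focal length, the screen size) are invented for illustration.

```python
# Hypothetical sketch: project a labeled 3D point onto an AR eyepiece.
# Given the wearer's head position and orientation, transform the point
# into the head's frame and apply a simple pinhole-camera projection.

def world_to_screen(point_world, head_pos, head_rot,
                    focal=800.0, screen_w=640, screen_h=480):
    """Return (u, v) pixel coordinates on the display, or None if the
    point is behind the wearer. head_rot is a 3x3 rotation matrix whose
    columns are the head's right, up, and forward axes in world space."""
    # Vector from the head to the point, in world coordinates.
    d = [p - h for p, h in zip(point_world, head_pos)]
    # Rotate into the head's frame (multiply by the transpose of head_rot).
    x = sum(head_rot[i][0] * d[i] for i in range(3))
    y = sum(head_rot[i][1] * d[i] for i in range(3))
    z = sum(head_rot[i][2] * d[i] for i in range(3))
    if z <= 0:  # behind the wearer: nothing to draw
        return None
    # Pinhole projection onto the display plane.
    return (screen_w / 2 + focal * x / z, screen_h / 2 - focal * y / z)

# A label two meters straight ahead of a wearer at the origin lands at
# the center of the display.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(world_to_screen([0, 0, 2], [0, 0, 0], identity))  # -> (320.0, 240.0)
```

A real system repeats this calculation many times per second as the positioning equipment reports new head poses, which is what keeps the overlaid graphics “pinned” to objects in the world.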
This type of technology has been under development for more than 30 years, according to Scientific American, at such places as Harvard University, the University of North Carolina at Chapel Hill, the University of Utah, the U.S. Air Force’s Armstrong Laboratory, and the NASA Ames Research Center.
The U.S. military has funded much of the AR research over the last decade, spending millions of dollars not only at its own facilities but also at universities, says Sonny Kirkley, whose company Information in Place develops augmented reality solutions and holds contracts with the Army, Air Force, and Coast Guard. He says that although the university research was not ostensibly for military use, “everybody knew the intent was, how is this going to apply in a military space?”
In the 1990s, the term augmented reality was coined at Boeing, Scientific American says, when scientists there developed a prototype solution to help workers put together wiring harnesses.
As computer technology improved, augmented reality developed more rapidly. In 2000, a project called MagicBook garnered excitement at the computer graphics SIGGRAPH conference. MagicBook could be read like a normal book, but viewed through a head-mounted display, its pages came alive with animated 3D figures acting out the story. In his 2002 article “Augmented Reality and Education: Current Projects and the Potential for Classroom Learning,” Brett Shelton says the MagicBook project sparked people’s interest in the industrial applications of AR.
In the last few years, the definition of augmented reality has expanded as researchers have developed additional technologies. In addition to visual augmentation, AR can also encompass auditory augmentation (a computerized earpiece whispers information into a person’s ear), touch augmentation (also called haptic augmentation), or augmentation via a personal digital assistant (PDA). One researcher is even designing visual augmented reality that works with online learning. We will examine examples and learning applications for the various types of AR technology.
Because visual AR has been under development much longer than the other types, it is the furthest along in practical application. Unlike many AR technologies, which remain prototypes, visual AR solutions have actually been implemented in some instances, for example with technicians at Boeing and mechanics at the American Honda Motor Company. Both companies provide schematics on “heads-up” displays, as they’re called, to repair and maintenance crews for real-time electronic performance support.
Honda is deploying Nomad Expert Technician Systems from Microvision, hands-free wearable displays that reflect images directly into the user’s eye, in its 12 U.S. training centers. The systems give the entry-level and experienced technicians who come for training access to online vehicle history and repair information without requiring them to step away from the car to consult a separate computer display.
Nomad Expert Technician System. Photo courtesy of Microvision.
Microvision says its technology results in higher-quality work from less experienced technicians, with efficiency gains of 30 to 40 percent measured in real-life trials. The devices cost about $4,000, but the company says they pay for themselves in less than three months and that a typical dealership adds $16,107 in gross profit per technician by using them.
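The payback claim can be sanity-checked with simple arithmetic, if we assume the $16,107 gross-profit figure is per year (the article does not specify the period, so that is an assumption):

```python
# Rough sanity check of the payback claim. Assumes the $16,107
# gross-profit figure is per technician per year (not stated in
# the article), and that the profit accrues evenly by month.
device_cost = 4000                      # USD per device
annual_gross_profit = 16107             # USD per technician per year (assumed)
payback_months = device_cost / (annual_gross_profit / 12)
print(round(payback_months, 2))         # -> 2.98
```

Just under three months, consistent with the company’s claim under that assumption.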
Beyond Honda, some vocational technology colleges are using Microvision devices for automotive repair, and the military is using them for maintenance training. We can imagine many other uses for this type of visual augmented performance support for workers who don’t sit at desks and would not normally have access to a computer, but who need to consult written material or diagrams. Medical personnel and factory workers are just two groups who could benefit.
Accenture Technology Labs has developed a prototype auditory AR system called the Personal Awareness Assistant, a small wearable computer that can record and transmit sound and contains two microphones, a small camera, and voice-recognition software. One use is networking. When a user says the words “It’s nice to meet you, Jane,” the system records the person’s name and takes a picture. The assistant then stores the audio, image, time and date stamp, and location in an address book for later retrieval. So asking, “Who was that woman I met last week at the holiday party?” will bring up the information.
The Personal Awareness Assistant could have further uses in performance support. Kelly Dempski, one of the researchers at Accenture, imagines that the system could, for example, warn workers at a chemical plant that they are entering an area that’s not safe for them because they lack the required safety gear.
Personal Awareness Assistant. Photo courtesy of Accenture.
Dempski envisions that the system could increase its level of support based on user needs. It could whisper in the ear of a first-day employee and tell her where to report and when. After she’s been on the job for a while, it could provide suggestions on improving the quality of her work.
Accenture also foresees uses for the device in what its website calls “collective intelligence.” The scenario: a user is in an important business meeting, and a new customer asks a question he doesn’t know the answer to. Via the device, colleagues situated elsewhere can give him the answer discreetly.