“The vision of the future of learning is to be able to get just the right stuff to just the right person at just the right time and place in just the right way and with just the right context on just the right device and through just the right medium.”
— Wayne Hodgins
The dream is clear: In a ubiquitous computing world, we become a hybrid of human and machine, with the system acting as an intelligent partner that provides us with appropriate content at a particular moment. With the perfect memory and detailed information-processing capacity of a computer, and the pattern-matching and judgment capability of the individual, the whole is greater than the sum of the parts.
One component of that is the ability to deliver information as Wayne Hodgins has suggested: customizing the information to the need. Whether communicated to the individual through PDA-sized screens, on a cell phone’s audio channel, through augmented-reality goggles, through holographic projection, or directly jacked into our neural system, we want to deliver information customized to the situation. And, with more on-demand production, shorter product cycles, greater mobility, and pressures to reduce training, we can’t assume that we can pre-load the necessary information into either human or device.
We want to deliver the right information, and ideally not require the worker to spend time looking it up. By delivering personalized, customized, contextualized information, we can make people more effective in their tasks, and more effective over time. This isn’t just a dream; this is doable, as our limits are no longer technological. No, our barriers are organizational, including the need to adhere to standards, the commitment of resources, and so forth. More important, we also need a clear set of models that detail what the right person, time, place, way, context, device, medium, and so on actually means.
To deliver this content, we need to indicate who the user is, what the context is, and what content is available. There isn’t necessarily a clear demarcation around all this. For instance, is the device a part of the user model or context model?
However, we need to put a stake in the ground; we need a first cut that illustrates the issues and gives us a straw man to drive discussion and progress. Here are four key models that paint a rich picture of the knowledge we need: a content model, a user model, a context model, and a task model (see Figure 1).
Before elaborating on each model, we need to understand the system that they contribute to, as well as a way to use these models as part of an environment that can choose what information to deliver. In a very general sense, these models provide the information needed by a central learning engine, which uses the current information about the situation, along with the information from these models, to pull the appropriate content from a content repository and deliver it to the learner (see Figure 2).
In this system, some action, whether by the user (moving to a new location) or by the environment (a device that notifies a changed status), updates the context data that is sent to the system. This context could include such things as time, status, location, and user. The engine uses the models to decide what content would make sense to deliver in this context, and specifies the content to be made available. To do that, we need to be specific about the information we use to make that decision of content to be sent; we need to drill into the models.
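The flow just described can be sketched in code. This is a minimal illustration of the idea, not an implementation of any real system; every name here (the model fields, the repository entries, the matching rules) is a hypothetical example.

```python
# A sketch of the central delivery loop: a context event arrives, the engine
# consults the models, and pulls matching content from the repository.
# All names and fields below are invented for illustration.

def select_content(context_event, user_model, task_model, content_repository):
    """Given a context update, pick the content items worth delivering."""
    candidates = []
    for item in content_repository:
        if (item["topic"] == task_model["current_task"]         # relevant to the task at hand
                and item["format"] in context_event["formats"]  # renderable on an available device
                and item["id"] not in user_model["seen"]):      # not already delivered
            candidates.append(item)
    return candidates

# A context change (say, the user arriving at a client site) triggers selection.
repository = [
    {"id": "cm-101", "topic": "repair", "format": "pdf"},
    {"id": "cm-102", "topic": "repair", "format": "flash"},
]
event = {"location": "client-site", "formats": ["pdf"]}
user = {"seen": set()}
task = {"current_task": "repair"}
print([c["id"] for c in select_content(event, user, task, repository)])  # → ['cm-101']
```

The point of the sketch is only the shape of the decision: context triggers the engine, the models filter, and the repository supplies the content.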
To choose the right content, we need a model of the content available. More specifically, we need a model of the types of content available, as well as the ways to access content. To that end, there are three major categories of information that constitute a content model:
- the different components of information
- the metadata that we tag the information with to identify it
- the standards that the content conforms to (see Figure 3).
The informational components describe what content types are available, in terms of their semantic roles. For instance, a set of repair procedures is a different informational type than a job aid for handling customer sales objections. Increasingly, we should be moving toward single-sourcing content: developing content once, with all the potential consumers in mind, so that it encompasses all the projected needs.
This is in opposition to the current model, in which marketing presentations are rewritten time and again into engineering requirements, sales training, user training, tech support, and other forms of help. When we know who the consumers are and what the needs are, we can articulate a content model: a structured template detailing what information to write and how to write it, so that it can be transformed through eXtensible Markup Language (XML) and style sheets into the specific content needed.
We also can specify the standards that we recognize. These might be the Sharable Content Object Reference Model (SCORM) terms used for learning objects, but they could also be other standard document formats. If we know we have a Portable Document Format (PDF) file versus a Flash file, we might know whether it can be delivered on the person’s watch (yes, Swatch and Microsoft have both promised or delivered information appliances in the form factor, and with the dual role, of a watch) or on their tablet PC.
And, finally, we will need more information than just the type of information and the format. We’ll need to know what the information is about: the semantic content that is encapsulated. Using standardized labels (e.g., Dublin Core metadata), controlled vocabularies (ontologies), and other necessary identification, we can specify particular bits of content. We’ll also want to know what sort of presentation it is, what media it contains, and, importantly, what knowledge it conveys. Ideally, the content will be of small granularity, aggregated into larger chunks but accessible (and tagged) at the smallest level.
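Such tagging can be pictured as follows. The element names (identifier, title, subject, type, format) follow the Dublin Core vocabulary mentioned above; the particular granule, its values, and the query helper are invented for illustration.

```python
# A sketch of metadata on a single small-granularity content chunk.
# Field names follow Dublin Core elements; the values are hypothetical.

granule = {
    "identifier": "proc-fuel-pump-07",
    "title": "Fuel pump removal, step 7",
    "subject": ["repair", "fuel-system"],   # controlled-vocabulary terms
    "type": "procedure-step",               # semantic role of the content
    "format": "application/pdf",            # delivery format
    "isPartOf": "proc-fuel-pump",           # aggregation into a larger chunk
}

def matches(granule, subject, accepted_formats):
    """Can this chunk satisfy a request on this topic, on this device?"""
    return subject in granule["subject"] and granule["format"] in accepted_formats

print(matches(granule, "repair", {"application/pdf"}))  # a PDF-capable device → True
print(matches(granule, "repair", {"audio/mpeg"}))       # an audio-only channel → False
```

Because the granule is tagged at the smallest level and carries an `isPartOf` pointer, the same chunk can be delivered alone or assembled into the larger procedure.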
Note that this model does not ignore social possibilities. Information components might be chat sessions with mentors, experts, or collaborators.
To take advantage of this understanding of content and media, we need to also understand the user. We need to know a number of characteristics about users, including what we know about what they know, their characteristics as learners, and their preferences for information (see Figure 4).
With learning management systems, and also through some of the knowledge management tools that analyze documents and emails to identify expertise, we can build a rich picture of what learners know. Moreover, as we deliver content we can record that the user has seen a given bit of knowledge, and update their knowledge model accordingly (perhaps not completely, only partially, modeling their knowledge acquisition at a finer level of granularity).
We also have the potential to recognize that, by adding extra knowledge on top of the information we deliver, we can make it a complete learning experience rather than just a performance opportunity. The system should know when to treat a request as a performance need, providing only the minimal information to get a person past the obstacle, and when to move people along a learning path, providing instructional structure around the task.
We also want to know about this individual, what particular learning style and strengths they have. For immediate needs, we might deliver in the most efficient manner, but we have alternatives. If it’s not an immediate need, we might deliver in a challenging way and develop the learner’s strengths over time. Also, if we don’t have optimal content (ideally matched to the individual’s capabilities), we might be able to provide non-optimal content and some separately developed support materials. We have a real opportunity to not only meet immediate needs, but develop our learners over time.
Finally, we need to know the individual’s preferences. If an individual prefers an MMS (Multimedia Message Service) message, or receiving information via email instead of being pointed to a web portal, we should be able to accommodate that. We may also opt to provide pull instead of push opportunities, act on only some of the information we might have about a user (protecting privacy preferences), and consider alternatives based upon different situational details.
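The preference handling described above might look like this in miniature. The channel names and preference fields are hypothetical; the sketch only shows preferences and privacy settings constraining delivery.

```python
# A sketch of applying user preferences to delivery.
# Field names ("allow_push", "channel") are invented for illustration.

def delivery_channel(user_prefs, default="web-portal"):
    """Respect the user's preferred channel and their push/pull choice."""
    if not user_prefs.get("allow_push", True):
        return "pull-only"                    # user opted out of push delivery
    return user_prefs.get("channel", default)

print(delivery_channel({"channel": "mms"}))    # → mms
print(delivery_channel({"channel": "email"}))  # → email
print(delivery_channel({"allow_push": False})) # → pull-only
```

The default argument stands in for the fallback (the web portal) when no preference is recorded.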
We also want to know the task that is being performed. This is part of the context, but can be consistent across different locations, and the same location can have different relevant tasks, so I chose to create a separate category. As part of the task model, we should consider the relevant roles that an individual might have and the responsibility level in that role, the procedures applicable, and phases of the process that an individual might be in (see Figure 5).
In Educational Technology magazine, I argued that the process has different phases, including information access, problem solving, and reflection, with different information and social needs at each phase. So, when we first encounter a problem, we’re looking for an answer. If we don’t find it, we’ll have to go into problem solving. If that works, we should then capture that knowledge.
We also need to know the role and responsibility of the individual. Different roles in the same location can differ vastly, such as performing a repair versus quality control on that repair. Similarly, given a particular condition at a specific location, certain levels of responsibility might have different appropriate procedures than others (a fire chief might be able to make judgment calls in a situation that a police officer could not). Obviously, for any task there are associated procedures, and perhaps integrating procedures that sequence the component procedures. The information appropriate for an individual in a particular location depends on an assessment or indication of the task, and that may depend on outcomes that change the context.
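The role-and-responsibility distinction can be captured as a lookup keyed on both role and condition, echoing the fire-chief example above. The roles, conditions, and procedure names are all invented examples of the idea, not a real procedure catalog.

```python
# A sketch of responsibility-sensitive procedure selection: the same
# condition yields different procedures for different roles.
# Keys and values are hypothetical.

procedures = {
    ("fire-chief", "structure-fire"): ["assess", "judgment-call-entry", "direct-crews"],
    ("police-officer", "structure-fire"): ["secure-perimeter", "await-command"],
}

def applicable_procedures(role, condition):
    """Same location and condition, different procedures by role."""
    return procedures.get((role, condition), [])

print(applicable_procedures("fire-chief", "structure-fire"))
print(applicable_procedures("police-officer", "structure-fire"))
```

An integrating procedure, in this framing, would simply be another entry whose steps reference the component procedures in sequence.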
The context itself is the final element that helps decide the particular information necessary. Into this picture come the time and place, as well as the particular resources available at this juncture (see Figure 6).
We of course need to know where the individual is. If they’re at their desk, on the road, at a client site, or in an airport, we’ll have different constraints. Similarly, we need to know when it is: is it late or early, is there an imminent flight? For instance, the tasks done in a medical lab change during the day, when the focus is on processing samples, versus nighttime, when preventive maintenance gets done. We also need to know what is available in the area. Is there a display device, an input device, an electric plug, or particular tools? All these could affect what might be possible, and consequently what information to bring.
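The time-of-day point can be made concrete with a small rule, mirroring the medical-lab example above. The cutoff hours and location names are invented; the sketch only shows context attributes (place and time) shifting the likely task.

```python
# A sketch of context constraining the likely task: same place,
# different time, different task. Hours and labels are hypothetical.
from datetime import time

def likely_task(location, now):
    """In the lab, daytime means sample processing; nighttime, maintenance."""
    if location == "medical-lab":
        if time(7) <= now < time(19):
            return "sample-processing"
        return "preventive-maintenance"
    return "unknown"

print(likely_task("medical-lab", time(10)))  # daytime → sample-processing
print(likely_task("medical-lab", time(23)))  # nighttime → preventive-maintenance
```

A fuller context model would add the available resources (displays, plugs, tools) as further fields gating what content can actually be used on the spot.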
When we integrate these models into the overall learning system, we see a rich picture of the necessary knowledge to start delivering the dream (Figure 7).
This whole system is a necessary, but by no means sufficient, component of creating a true symbiosis of individual and technological augment. More is needed in terms of creating unified standards for all these models, defining the necessary intelligence (the rules that use the models), and providing tools, as well as content, for the learner.
This isn’t the final answer, but it’s a first stab at a cohesive and comprehensive set of information that would help us deliver on the dream of personalized information. We now can do it, and we should.
Published: August 2005