LoD Survey: Quality and Effectiveness of E-Learning


How do e-learning practitioners rate the quality and effectiveness of their e-learning programs? Where are the soft spots among e-learning technologies used by practitioners? How do perceptions of quality and effectiveness of e-learning vary by region?

These questions, the subject of much debate as adoption of technology-enabled learning moves from novelty to common business practice, are the focus of a recent study conducted by the Learning on Demand (LoD) program of SRI Consulting Business Intelligence. The 24-question, Web-based survey, conducted with the participation of ASTD and other LoD partners, garnered nearly 350 responses from practitioners around the globe. Following is an overview of key findings from the survey. (A PDF file containing the full results of the survey is available on the LoD Website.)

Demographics

Before examining responses, a brief look at the types of professionals who completed the survey and the location and nature of their organizations is in order. Figure 1 illustrates the positions held by survey respondents. Corporate training/e-learning practitioners, including trainers, training managers, and instructional designers, accounted for just under two-thirds (64%) of survey respondents, with training/e-learning managers comprising the largest single category of respondent (42%). Just under one-fifth of respondents (19%) indicated they work for e-learning providers. Among the 17 percent who indicated "other" for their job role, consultants and business development managers were the most frequently cited positions. Researchers, marketing folks, and one self-described "pundit" also appeared in the "other" category.

In terms of the types of organizations represented in the survey, Figure 2 shows a wide distribution across categories. Professional service providers represent the largest single category at 19 percent, followed by IT firms at 17 percent, a reflection of the number of e-learning developers among respondents. A surprisingly large percentage of respondents (7%) represent non-profit organizations, while 10 percent hail from manufacturing firms, a somewhat smaller percentage than anticipated. Eight percent of respondents indicated they work for governmental bodies.

Just under two-thirds of respondents (65%) work in the United States or Canada (Figure 3). Collectively, Europe (including the United Kingdom and Ireland) accounts for 14 percent of respondents, and 16 percent hail from the Asia/Pacific region, which includes India and Australia. Among the 5 percent of respondents in the "other" category, the majority are from Latin America.

A broad diversity in organization size is reflected among survey respondents, as shown in Figure 4. While 50 percent of respondents work in organizations of 500 or more employees, an equal percentage work in organizations of fewer than 500 employees. The bulk of those reporting from organizations of 49 or fewer employees indicated they work for consulting providers or selected "other" for their type of employer.


Use of e-learning

Survey respondents were asked a series of questions to gauge their use of e-learning technologies and services. As shown in Figure 5, fully two-fifths of respondents (41%) are using e-learning enterprise-wide, while 28 percent report using e-learning in business unit initiatives. Some 30 percent say they're in the initial phases of e-learning adoption. And one-fifth of respondents are using more sophisticated e-learning technologies, including LCMS platforms and learning objects.

Figure 6 shows the types of e-learning used by respondents. Most common was the use of custom or internally developed content (72%), followed by off-the-shelf content (50%) and internal content authoring tools (48%). Some 42 percent of respondents report use of an LMS in their organization, and a surprising 39 percent make use of synchronous virtual classroom platforms. A bit less than a third of respondents (29%) say that hosted e-learning platforms are used in their organization, reflecting the significant inroads made by these providers in the e-learning market in the past two years. Also surprising was the percentage of respondents who report use of streaming media platforms: nearly one-fifth (18%) indicated they use them.

These findings, especially the wide variety of technologies being harnessed to provide technology-enabled learning, reflect continued growth in the sophistication of e-learning adopters as the field matures.

Quality findings

Asked to provide an overall rating of their e-learning initiatives, a solid majority of respondents (60%) said they consider their programs to be "good" or "excellent." Guidelines for rating overall quality were included in the question (Figure 7). A third of respondents indicated their program is "fair," and only 6 percent consider it to be "poor."

However, significant differences emerge when ratings of this overall indicator are compared across regions. Figure 8 shows a regional breakdown of responses: United States/Canada, Europe (including the United Kingdom and Ireland), and Asia/Pacific (including India, Pakistan, and Australia). U.S. and Canadian respondents had the highest opinion of their e-learning efforts, with nearly two-thirds (62%) rating their program as "good" or "excellent." A significantly smaller portion of European respondents (47%) rated their program "good" or "excellent," with a majority (51%) rating their program as only "fair." And Asia/Pacific respondents had the least favorable view of their e-learning initiatives; only 3 percent of Asia/Pacific respondents rated their e-learning initiative "excellent," one-eighth the percentage of U.S. respondents who hold that view (which implies a U.S./Canadian figure of roughly 24 percent). Meanwhile, 12 percent of Asia/Pacific respondents view their e-learning program as "poor." Although cultural factors likely play a role in ratings of this overall indicator, the survey suggests there are very different views of the overall quality of e-learning among these three regions. This supports anecdotal assessments of e-learning from practitioners and industry observers in those regions.

Exactly how respondents define quality is shown in Figure 9. Respondents were allowed to select more than one of the options, and four-fifths (81%) saw "learning effectiveness" as an appropriate indicator of quality. "Learner experience" (62%) and "cost efficiency" (60%) are neck-and-neck as quality indicators. Among responses for "other," accessibility of e-learning appeared most frequently.
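
Because respondents could choose more than one indicator, each option's percentage is computed against the total number of respondents rather than the total number of selections, which is why the figures sum to well over 100 percent. A minimal sketch of that kind of multi-select tabulation, using hypothetical response data rather than the survey's raw answers:

```python
from collections import Counter

# Each respondent's (multi-)selection of quality indicators;
# hypothetical data for illustration, not the survey's responses.
responses = [
    {"learning effectiveness", "learner experience"},
    {"learning effectiveness", "cost efficiency"},
    {"learning effectiveness", "learner experience", "cost efficiency"},
    {"cost efficiency"},
]

# Tally how many respondents chose each option.
counts = Counter(option for answer in responses for option in answer)

# Divide by the number of respondents (not the number of selections),
# which is why multi-select percentages can sum to more than 100%.
for option, count in counts.most_common():
    print(f"{option}: {100 * count / len(responses):.0f}% of respondents")
```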

Respondents were then asked to rate the quality of various e-learning components used in their programs; responses are shown in Figure 10. Content emerged with the highest quality marks: 86 percent rated their content as medium- or high-quality. This finding comes as a surprise, given the hand-wringing over content quality found in many online forums and industry conferences. The question doesn't distinguish between internally developed and off-the-shelf content, which is one possible explanation for the discrepancy. LMS platforms received the next highest quality rating, though only 22 percent rated their LMS as "high quality" and a nearly equal percentage (23%) gave theirs a "poor quality" rating. One surprising, but indirect, finding is the percentage of people who indicated they do not use synchronous, LCMS, and skills-management tools: nearly two-thirds (61%) of respondents indicated they do not use synchronous tools, and fully 72 percent say they do not use LCMS platforms. Among those who do use them, a robust 73 percent rated their synchronous tools as medium- or high-quality, and 70 percent said their LCMS was medium-quality or higher. Skills-management tools were the lowest-regarded e-learning tools, with 40 percent of users of such tools rating theirs as "poor quality."

E-learning content—variously described as courses, modules, or learning objects—has been the subject of much discussion in the debate over quality. Respondents were asked to rate the quality of various types of e-learning content, the results of which are depicted in Figure 11. Not surprisingly, internally developed content received the highest marks, with 71 percent of respondents rating such content medium- or high-quality. However, a far more unexpected finding is the higher quality ratings that off-the-shelf content garnered over custom-developed content (defined in the survey as content developed by outside consultants). While 59 percent viewed their off-the-shelf content as medium-quality or higher, less than half (48%) had similar regard for their custom-developed content.

Ratings for other types of content, including simulation- and game-based content and performance-support-oriented content, reflect the early stages of use of these types of content in respondents' organizations: roughly half of respondents indicated that they do not yet use these content varieties. But among those that do, simulation-based content received the highest marks. Among the 53 percent of respondents that use simulation-based content, 31 percent consider it to be high-quality while another 57 percent rate it as medium-quality. Only 11 percent described such content as poor-quality. Virtual classroom sessions, in which content is conveyed in an instructor-led online environment, also received high marks from the 53 percent of respondents that use such platforms. Of them, 26 percent rated their virtual classrooms as high-quality and another 55 percent rated them as medium-quality.

Where respondents feel improvement is most needed in off-the-shelf e-learning content is depicted in the accompanying figure. Asked to rate each of seven aspects of content in terms of its need for improvement, high percentages of respondents selected personalization, interactivity, and assessment/testing components as areas with some or a high need for improvement. Content navigation garnered the highest percentage of "no improvement needed" responses at 18.5 percent, though 61.5 percent felt that some improvement was needed there. Nearly one-fifth of respondents (18%) felt no improvement was needed in content's "relevancy to business," while 44 percent felt that some improvement was needed in aligning content with business goals and 38 percent see a high need for improvement in this regard.

How respondents view the influence of various external factors on the quality of their e-learning programs—some of which are beyond their direct control—is shown in the accompanying figure. A majority (55%) cited continued funding for e-learning as a positive impact, an indication that those with programs underway are finding continuing support. Almost as many (48%) cite support from their IT department as a positive influence on their program. Quality of end-user PCs and availability of bandwidth to support e-learning are the two areas that garnered the largest percentages of "negative impact" responses, at 31 percent each.

Gauging effectiveness

Whether and how e-learning practitioners gauge the effectiveness of their programs was measured in a final series of survey questions. As shown in the accompanying figure, just under half (49%) of respondents say they have developed quantifiable measures of the effectiveness of their program. Another 15 percent say they have made efforts to gauge effectiveness in quantifiable terms but have not succeeded thus far. More than a quarter of respondents (28%) say they have not pursued such efforts.

How respondents have gone about gauging effectiveness is depicted in a companion figure. The two most commonly used approaches are tracking the number of learners trained, used by 43 percent of respondents that have developed effectiveness measures, and calculating the cost savings of e-learning over traditional instructor-led classroom training (38%). Some 35 percent also rely on learner assessments as an effectiveness indicator, and 32 percent make use of learner self-reporting on their use of e-learning. Just over one-third of respondents (34%) say they have engaged in a formal ROI analysis of their e-learning. Interestingly, over a quarter (27%) measure the effectiveness of their program in terms of customer satisfaction, perhaps an indication of efforts to tie e-learning to broader business objectives.
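
The survey doesn't specify the formulas behind these cost-savings and ROI calculations, but such analyses typically compare the delivery cost of e-learning against equivalent instructor-led training. A minimal sketch of that arithmetic, with entirely hypothetical figures:

```python
# Hypothetical cost comparison between classroom training and e-learning
# for the same audience; all figures are illustrative, not from the survey.
learners = 500
classroom_cost_per_learner = 900      # instructor fees, travel, facilities
elearning_cost_per_learner = 250      # licenses, hosting, support
elearning_development_cost = 120_000  # one-time content build

classroom_total = learners * classroom_cost_per_learner
elearning_total = learners * elearning_cost_per_learner + elearning_development_cost

# Savings from delivering the program online instead of in a classroom.
savings = classroom_total - elearning_total

# A simple ROI figure: net benefit relative to the e-learning investment.
roi = savings / elearning_total

print(f"Classroom total:  ${classroom_total:,}")
print(f"E-learning total: ${elearning_total:,}")
print(f"Savings:          ${savings:,}")
print(f"ROI:              {roi:.0%}")
```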

The influence of various factors on the perceived effectiveness of respondents' e-learning programs is depicted in the accompanying figure. Support from senior staff for e-learning initiatives emerged as having the most beneficial impact on program effectiveness, with 52 percent of respondents indicating such support "greatly benefits" their program. Internal marketing of e-learning to employees was also widely viewed as beneficial, with 75 percent of respondents saying such marketing "benefits" or "greatly benefits" their program. And equal percentages of respondents (70%) cited blending e-learning with classroom instruction and providing time for employees to use e-learning as beneficial aspects of their programs.

In a follow-up question, survey participants were asked to identify the two factors from the same list that they feel have had the most significant impact on their program(s). Support for the program from the CEO and other "CXOs" had top billing, cited by 51 percent of respondents, followed by blending of e-learning and classroom training (35%) and internal marketing (32%).

What would most help e-learning practitioners improve their efforts was probed in several questions. The accompanying figure shows respondents' selections among eight alternatives for winning the greatest gains in effectiveness. Though none of the eight alternatives stands out dramatically, developing more customized content, either internally or through outside contractors, was viewed by the largest percentage (39%) as a first or second choice for improving effectiveness. Some 35 percent selected "greater personalization" as a first or second choice, and 34 percent named "high-level support for e-learning" among their picks. Less than one-fifth (19%) named "greater integration/interoperability" among their top two choices, a surprising finding given the growing attention to integration issues at industry trade shows and in online discussions. One possible explanation is that only the most sophisticated e-learning adopters—those seeking to integrate their e-learning with enterprise applications—find integration/interoperability to be a priority.

Some interesting regional differences are evident in responses to what would most improve e-learning, as shown in the corresponding figure. European respondents see improvements in content, including more engaging off-the-shelf content and more custom-developed content, as somewhat more beneficial than their American counterparts do, while Asian respondents are more inclined to regard synchronous tools as most beneficial. At the same time, Asian respondents see "greater personalization" as more significant than Americans do. They also put much less value on "high-level support for e-learning," perhaps an indication that such support isn't lacking.

The same question was then turned on its head, asking survey participants to choose what would least improve the effectiveness of their e-learning. The accompanying figure illustrates their responses, broken down by region. While nearly a quarter of U.S. and Canadian respondents (24%) feel that improving off-the-shelf e-learning content would yield the smallest benefits, only 7 percent of European respondents saw such content improvements as least desirable. For Europeans, improving learning management capabilities held the lowest promise for improving the effectiveness of their programs. And Asian respondents, reiterating the earlier finding, saw high-level support for e-learning as the least compelling option.

What have we learned?

The survey reveals that training and e-learning practitioners are somewhat more satisfied with the quality and effectiveness of their programs than recent discussion suggests. At the same time, it shows that there's considerable room for improvement, both in the technologies used to support e-learning and in the quality of e-learning content. And survey participants identified a number of factors they feel would improve the effectiveness of their programs, including greater use of custom-developed content and greater personalization capabilities.

In perhaps the most surprising finding of the study, significant disparities exist among practitioners in North America, Europe, and Asia in their overall view of their e-learning efforts. The generally positive view held by U.S. and Canadian respondents isn't echoed by their European and Asian counterparts. Industry observers cite two contributing explanations: cultural differences among these regions in perceptions and use of organizational learning, and the different levels of maturity of e-learning in each region.

Equally surprising is the positive regard that practitioners have for commercial off-the-shelf e-learning content. To be sure, their assessment isn't stellar, but the fact that nearly three-fifths of participants (59%) rate off-the-shelf content as medium-quality or higher stands in sharp contrast to widely held perceptions. The finding suggests that content quality isn't the Achilles' heel of e-learning that many have suggested, and that a number of other factors contribute to quality and effectiveness concerns. The high value that respondents place on personalization and interactivity of e-learning content suggests that improvements in these areas, through use of learning-object approaches and new content development capabilities, will foster further gains in satisfaction.

A substantial majority of survey respondents (64%, combining the 49 percent with quantifiable measures in place and the 15 percent still working toward them) have made efforts to quantitatively gauge the effectiveness of their e-learning programs, another surprising finding. Though the survey reveals that many of these efforts involve basic approaches, such as calculating cost savings over traditional training methods and tracking learner "completions" of e-learning courses, sizable percentages are pursuing more sophisticated approaches, including measuring e-learning's impact on customer satisfaction and bottom-line gains.

Published: May 16, 2003
