To Infinity and Beyond (SCORM): In-Depth Analysis of Learner Behavior

For those who haven't been in the eLearning game since the late 90s, the days before the SCORM standard was released were chaotic. A new LMS seemed to pop up every month, and each one had its own way of creating and launching courses. If you decided to switch to a different LMS, you had a major problem, and a major expense, since the API format of one LMS most likely did not match another's.

As the SCORM standard pushed the plug-and-play concept, it began the slow path towards industry-wide adoption. (Does anyone remember the "plug-fests" where early adopters of SCORM would show up and plug their courses into any SCORM-compliant LMS to on-lookers' amazement?) I believe that this process was expedited when the larger LMS companies started buying up the smaller ones; now there was incentive from the LMS providers themselves to establish a stable playing field.

SCORM grew from its initial release into SCORM 2004, and then into the Experience API (xAPI), the next quantum leap, which allowed more informal training to be tracked as well. But important gaps remain.

For example, learning based on use-case scenarios has picked up momentum as instructional designers become more proficient at taking dry material and converting it into a story-oriented format for better retention. (Remember when "The One Minute Manager" was cutting edge for presenting management concepts as a story instead of more didactic prose?) But making use-case scenarios really come alive requires branched learning.

However, even branched use cases have a major drawback. Branched interactions make for an excellent user experience, but managers rarely have any way to see how the user reached the end point; they only know it was reached when the course status reads "complete."

We recently have been working with an innovative client who wanted a series of systems-training courses designed so that the number of clicks taken to reach the correct end point was counted. It is the closest that I've seen to giving a manager a realistic account of what is happening within a user's experience. (And don't get me started on "Test Me" formats in Captivate, which I find have very little instructional design benefit for most courses.)

What if a manager could see which options are being selected by the user as they walk down the various paths? What if a report could reveal all the users who consistently chose incorrect paths and may need further remediation?
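That kind of report is straightforward to build once per-choice data exists. Here is a minimal sketch, assuming the learner's branch choices have been captured as xAPI-style records queried from a Learning Record Store; the field names, email addresses, and choice values are made up for illustration:

```python
from collections import Counter

# Hypothetical sample of branch-choice records; in practice these would be
# queried from an LRS (Learning Record Store) rather than hard-coded.
statements = [
    {"actor": "ana@example.com", "choice": "escalate", "correct": False},
    {"actor": "ana@example.com", "choice": "ignore", "correct": False},
    {"actor": "ben@example.com", "choice": "de-escalate", "correct": True},
    {"actor": "ana@example.com", "choice": "escalate", "correct": False},
    {"actor": "ben@example.com", "choice": "ignore", "correct": False},
]

def learners_needing_remediation(stmts, threshold=2):
    """Return learners who made at least `threshold` incorrect choices."""
    wrong = Counter(s["actor"] for s in stmts if not s["correct"])
    return sorted(actor for actor, n in wrong.items() if n >= threshold)

print(learners_needing_remediation(statements))  # ['ana@example.com']
```

The threshold is the kind of knob a manager could tune: one wrong turn may be exploration, but repeated wrong turns suggest a learner who needs further remediation.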

SCORM simply cannot deliver that kind of detailed reporting from within a course.

Recently, Tribridge released ContentSphere. ContentSphere is not an LMS; it is a platform designed specifically for tracking both informal learning (website articles, blogs, and YouTube videos) and traditional formal learning from various sites (courses launched by LMSs or content providers), and it will also be able to connect with custom-built courses to report at a far deeper level than SCORM.

This deeper-level reporting allows the course to communicate via both the SCORM and xAPI standards and report back any aspect you want to track within a course. Decisions at a branch in the story? No problem! Number of times an exam was attempted? Sure thing! Completion of a video or non-question interaction in a course? Absolutely! How about the course window losing focus because the user was only letting it play in the background? Why not?
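Under the hood, each of these events becomes an xAPI statement: an actor, a verb, an object, and an optional result. The statement structure below follows the xAPI specification (the "responded" verb is a standard ADL verb ID), but the activity ID and choice value are hypothetical, and this sketch only builds the JSON; a real course would POST it to an LRS:

```python
import json
from datetime import datetime, timezone

def branch_choice_statement(email, activity_id, choice, success):
    """Build an xAPI statement recording which branch a learner chose.

    Follows the xAPI statement structure (actor/verb/object/result);
    the activity ID and choice values are illustrative only.
    """
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/responded",
            "display": {"en-US": "responded"},
        },
        "object": {
            "objectType": "Activity",
            "id": activity_id,
        },
        "result": {"response": choice, "success": success},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = branch_choice_statement(
    "learner@example.com",
    "https://example.com/course/scenario-1/branch-3",
    "escalate-to-manager",
    True,
)
print(json.dumps(stmt, indent=2))  # ready to send to an LRS statements endpoint
```

Because the result's `response` field is free-form, the same pattern covers exam attempts, video completions, or window-focus events simply by swapping the verb and activity.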

In fact, anything that can be captured within the course run-time can now be tracked and reported through ContentSphere, and combined with tools like Watershed, that data can be aggregated into meaningful trends across many users. These tools can unlock the gate to improving training, improving user retention, and, in the end, improving the bottom line.

To learn more about Tribridge ContentSphere, read our lightpaper.
