Instrumentation Guidelines


This document presents some guidelines on how to instrument learning objects for logging events to the metacog™ service using metacog's JavaScript Client Library.

The logger API does not force the use of a predefined structure or set of events. The developer has freedom to design the set of events that best represent the data to be collected and analyzed given the specific project requirements. Nevertheless, the following general steps may help to design the best instrumentation strategy (further details appear in the following sections):

Step 1: Identify events and models.

A good approach to designing the instrumentation of a learning object is to focus on the events and their relation to the model. Inspect your learning object, identifying and classifying possible events of interest. They may belong to any of these categories:

  • Application events: These events are associated with buttons and elements in the user interface that do not have a direct impact on the status of the model. For example, a click on a help button that displays instructions about the use of the learning object does not affect the status. It may nevertheless be of interest to log it, because it is an indicator of how well the learner understands the goals of the exercise.
  • Model events: Other events have direct effects on the status of the model. They may be associated with UI elements, like a drop-down to select the complexity level of a simulation or a reset button. In other cases, they may not have a direct UI mapping: for example, the actions taken by an AI agent, or random events generated by the model during a simulation.
  • Model data: Depending on the complexity of the model, it may be convenient to log its status every time a model event is triggered. This is especially valuable when a single event changes a large portion of the model status. Consider what the status of your model is before and after each event.
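As a sketch of this classification, the payloads below show what each category might look like for a hypothetical simulation widget. All event names and data fields here are illustrative design examples, not part of the Metacog API:

```javascript
// Hypothetical event payloads illustrating the three categories above.
// None of these names come from the Metacog API; they are design examples.

// Application event: UI interaction with no effect on the model status.
var helpEvent = {
  event: "help_opened",
  data: { screen: "intro" }
};

// Model event: changes the model status (here, a complexity drop-down).
var complexityEvent = {
  event: "complexity_changed",
  data: { from: 1, to: 2 }
};

// Model data: a snapshot of the model status, logged alongside the model
// event so the before/after states can be reconstructed during analysis.
var modelSnapshot = {
  event: "model_status",
  data: { complexity: 2, particles: 120, running: true }
};
```

Designing this taxonomy up front makes the later mapping to source code much easier.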

Step 2: Map events and models to source code.

Once you have identified events and model status, it is time to check the source code to establish a mapping with variables, methods and objects. Given the high degree of flexibility of JavaScript code, the use of third-party libraries and frameworks, and the requirements of the publishing mechanism of the learning-object container, the degree of difficulty may vary greatly.

A good instrumentation will not modify the target source code at all, but will plug the logger into the relevant variables and methods from a separate JavaScript file. As the figure above shows, the complexity of instrumentation will be higher when there is high coupling between view and model, and when there is no easy way to access the JavaScript variables that represent the status of the learning object. The latter is typical of a learning object that relies heavily on local variables inside jQuery callbacks to drive its logic. On the other hand, good practices like model-view separation and exposing your model's functionality through facades or business-object methods greatly reduce the instrumentation effort.

Important: There are two styles of instrumenting a Learning Object: logging only, or with playback support (required for recording and scoring of Training Sessions). The first mode is less intrusive, and its goal is to avoid touching the original source code of the Learning Object as far as possible. This is the mode described in this guide. For the mode with playback support, please check the Playback Guide.

Inclusion of the logger library

Typically, the only file you must modify in order to include the logger library is index.html (or its equivalent), to add the relevant script tags, plus a new JavaScript file, for example instrumentation.js (the exact name is not important), to keep all the new code with the event bindings and calls to the logger isolated:

    <script type="text/javascript" src=".../metalogger-x.x.x.js"></script>
    <script type="text/javascript" src="scripts/instrumentation.js"></script>

Please be sure to replace the x.x.x with the latest Client Library version.

In the above HTML snippet, notice that the main library is referenced from a CDN. This is the preferred way, because the CDN ensures that the browser downloads the file from the nearest available location. Also, notice that the file chosen was metalogger-x.x.x.js instead of metacog-x.x.x.js: metalogger is a compact version of the library with just the logging functionality.

On the other hand, instrumentation.js is a local file, stored in the scripts subfolder. Both script tags are included at the end of the body, making sure all the other scripts are loaded first.

Initializing the library

It is time to put some code inside the instrumentation.js file. In order to be able to send events, it is necessary to initialize the library, passing valid credentials and optional configuration options:

    Metacog.init({
        "session": {
            "publisher_id": 'YOUR_PUBLISHER_ID', //provided to you as part of Metacog API Credentials
            "application_id": 'YOUR_APPLICATION_ID', //provided to you as part of Metacog API Credentials
            "widget_id": 'YOUR_WIDGET_ID',
            "learner_id": "OPTIONAL",
            "session_id": "OPTIONAL"
        },
        "log_tab": true
    }, function(){
        console.log("metacog is ready!");
    });

This JavaScript snippet initializes the Metacog Client Library, passing a configuration object with the minimal information required to make it work.

In this example, the fields publisher_id and application_id are provided to you when you sign up for the metacog™ platform, and will be the same for all the learning objects that you want to instrument. This is an example of how they look (you can use these values for testing purposes):

  • publisher_id: 9d10ead1
  • application_id: c7f16e0b559f05489e2b900f52f08a99

The field widget_id is defined by you. It must be different for each of your instrumented learning objects, and will be used as a filter parameter in future data requests to the platform.

There are other values that can be passed in the configuration object, like verbose and idle. Check the logger documentation to learn more about them.

The second parameter of Metacog.init is a callback that is invoked when the library has been properly initialized. This is a good place to start the library's logging process, as we will see soon.

A note about security

If you need to offer access to instrumented learning objects outside of a secure environment, it is not advisable to expose both the application_id and the publisher_id on the client side.

By doing so, any user could gain access to these credentials and start accessing all the data stored in your Metacog account.

Instead, since version 3.4.0 you may use your own backend infrastructure to support the Client Library (or your own JavaScript code) with a stronger initialization mechanism, which involves a Metacog-generated learner_token instead of the application_id. By keeping the application_id secret in your own backend, you will effectively prevent access to your data from insecure environments.

To learn more about the secure initialization approach, we provide the "Toy Server" demo, which simulates the backend infrastructure of a Metacog customer. Keep in mind that all the other demos provided on our site use the insecure approach in order to present a straightforward implementation and highlight different possibilities of the Metacog platform.


  • Toy Server Demo (pending real link)
  • Toy Server Documentation (pending real link)

Student and session identification

Since multiple students will be sending events from the same Learning Object, it is necessary to have a way to group all the events that belong to the same student and to the same session. If that information is not tracked, there will be no way to extract meaningful results from a future analysis of the recorded data.

The configuration object includes the fields session.learner_id and session.session_id. They are optional attributes; if either of them is not defined, the logger will create a random value for it. This behavior is fine if there is no interest in analyzing different sessions of specific students, but only anonymous, unrelated executions.

If the requirements of the analysis demand the ability to identify consecutive sessions of each student, values that make sense when analyzing the data must be provided at this step.
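One practical way to handle this is a small helper that builds the session block of the configuration, taking the learner and session ids from your LMS or login system when available. This helper is hypothetical (it is not part of the Metacog library); the random-id fallback simply mirrors the logger's documented behavior for these optional fields:

```javascript
// Hypothetical helper (not part of the Metacog library): builds the
// "session" block of the init config. When no learner/session ids are
// supplied, random values are generated, mirroring the logger's own
// fallback behavior for these optional fields.
function buildSession(credentials, learnerId, sessionId) {
  function randomId(prefix) {
    return prefix + "-" + Math.random().toString(36).slice(2, 10);
  }
  return {
    publisher_id: credentials.publisher_id,
    application_id: credentials.application_id,
    widget_id: credentials.widget_id,
    learner_id: learnerId || randomId("learner"),
    session_id: sessionId || randomId("session")
  };
}
```

With explicit ids, e.g. buildSession(creds, "student42", "2024-05-01-a"), consecutive sessions of the same student can be grouped in later analysis.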

Logging events

The method Metacog.send is the way to send an event to the platform. This is its signature:

    Metacog.send({
        event: 'name',
        data: {},
        type: MetaLogger.EVENT_TYPE.UI
    });

The method receives an event as a JSON object. The attribute event is a string whose value is defined according to the needs of the instrumentation. Examples of event names are:

  • "start_simulation"
  • "reset"
  • "new_attempt"
  • "on_click_item"

Events may need parameters. In the case of "new_attempt", this may be the ID number of the attempt; for "on_click_item", it may be the id of the item and the current level of complexity. These parameters are passed in the form of a JSON object (not a JSON string) in the data attribute:

    var data = {
        item: "item01",
        level: "2"
    };

The attribute type must be passed as one of the values declared in the metacog™ logger class:

  • MetaLogger.EVENT_TYPE.UI: Use this one for events generated from interactions with the user interface (clicks on buttons, forms, mouse events...).
  • MetaLogger.EVENT_TYPE.MODEL: Use this one for events generated from internal logic in the Learning Object, or external events like those coming from a multiplayer game.

A call to Metacog.send will fail if the logger is not running. This is because the send method does not try to send the events immediately to the backend; instead, it stores them locally and assumes that the logger process is running and will take care of them.

Starting the logger: processing the event queue

Calling Metacog.send does not mean that the event will be sent to the platform immediately. Internally, the logger implements a queue where the events are stored. Periodically, the logger loop picks up some events from the queue and sends a package to the server. In the case of a failure on the delivery, the messages are pushed back to the queue to be sent again in the next pass of the loop.
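The queue-and-retry behavior described above can be sketched as follows. This is a conceptual illustration, not the actual Metacog internals: send() only enqueues, and each pass of the loop drains a batch, pushing it back on delivery failure:

```javascript
// Conceptual sketch (not the real Metacog internals) of the queue and
// retry behavior: send() only enqueues; a periodic loop drains a batch
// and, on delivery failure, pushes the batch back for the next pass.
function EventQueue(deliver, batchSize) {
  this.queue = [];
  this.deliver = deliver;        // function(batch) -> true on success
  this.batchSize = batchSize || 10;
}

EventQueue.prototype.send = function (event) {
  this.queue.push(event);        // no network activity happens here
};

// One pass of the logger loop; in the real library this runs on a timer.
EventQueue.prototype.tick = function () {
  if (this.queue.length === 0) return;
  var batch = this.queue.splice(0, this.batchSize);
  if (!this.deliver(batch)) {
    // Delivery failed: requeue the batch so the next pass retries it.
    this.queue = batch.concat(this.queue);
  }
};
```

The key property is that a network failure never loses events; it only delays them until a later pass of the loop.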

The logger runs its own internal timer-based loop and, depending on the configuration, it may generate automatic events periodically (for example, if idle has been defined).

This loop does not start automatically. It gives the instrumentation the opportunity to start it when there is a real need to begin sending events. A typical scenario would be a learning object that is embedded in a larger webpage with lots of text or other non-interactive content that the learner is supposed to process before starting the interaction. In this case, it makes sense to start the logger only when the learner indicates readiness to begin the interaction, maybe by pressing a button. For simple scenarios, the done callback of the Metacog.init method is a good place to start the logger, too.

Starting the logger is as simple as:

    Metacog.Logger.start();

Logger execution context and persistence

In which context does the logger process run? It depends on the persistence model, which can be specified through the storage attribute of the configuration object passed to the Metacog.init method:

  • "indexeddb": This is the default value. It will start a webWorker (a separate execution thread) with the polling mechanism over an IndexedDB database. This is the recommended way, because your main thread will not be affected by the polling code.
  • "offlinestorage": If present, or if IndexedDB is not supported by the browser, the queue will run over the localStorage mechanism. localStorage is not available from webWorkers, which means that in this case the polling will run in the same main thread as your widget's code.

Stopping event processing

There is also a Metacog.stop method available. It is not mandatory to stop the logger; you can send events until the user closes the window. But if you really need to stop the processing, you can invoke Metacog.stop(cb), passing a callback that will notify you when the queue is really empty.

Why is this callback mandatory? Remember that the logger needs to read events from the queue, package them and send them through the network to the backend. If you are logging events faster than the network can handle, the queue will not be empty at the moment you invoke stop. What happens is that the logger silently rejects all new events logged after the stop call, and invokes the done callback only when the queue has been completely emptied.
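The stop contract described above can be sketched in a few lines. Again, this is a conceptual illustration of the behavior, not the real implementation: after a stop is requested, new events are rejected silently, and the callback fires only once the queue is fully drained:

```javascript
// Conceptual sketch of the stop(cb) contract described above (not the
// real implementation): after stop is requested, new events are rejected
// silently, and the callback fires once the queue is fully drained.
function StoppableQueue(deliver) {
  var queue = [];
  var stopCb = null;
  return {
    send: function (event) {
      if (stopCb) return;        // rejected silently after stop()
      queue.push(event);
    },
    stop: function (cb) {
      stopCb = cb;               // the queue may still hold events,
                                 // so completion must be asynchronous
    },
    tick: function () {          // one pass of the logger loop
      if (queue.length > 0) deliver(queue.splice(0, queue.length));
      if (stopCb && queue.length === 0) {
        stopCb();                // queue drained: notify and finish
        stopCb = null;
      }
    }
  };
}
```

This is why stopping is callback-based: the moment you call stop is usually not the moment the last event actually reaches the backend.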

Intercepting method calls

Up until this point we have talked about how to instantiate the logger within the learning object and how to send events to the platform. But if all the logger calls must be contained in the instrumentation.js file, how are they going to be connected to the relevant methods in the learning object code?

The answer may vary depending on the way each learning object has been implemented. In some cases the simplest way is to use the same event-listener registration mechanism that the learning object uses, like jQuery, Backbone, or any other JS framework.

The logger also provides a helper function to perform method interception. The idea behind this method is that if you have a global function, or a method on a global object, whose call can be used as a signal for sending events to the logger, a proxy function can wrap the call to the original method, adding pre- and post-callbacks where the sending of events can be performed. The picture above better illustrates the concept.
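The interception pattern itself is plain JavaScript. The sketch below is a simplified, hypothetical version of such a proxy, written only to illustrate the idea; the logger's actual helper is logMethod, whose exact signature is documented in the logger documentation:

```javascript
// Simplified, hypothetical interception proxy illustrating the idea
// behind the logger's logMethod helper (see its docs for the real API).
// Wraps a method on a holder object with optional pre/post callbacks.
function intercept(holder, methodName, preCallback, postCallback) {
  var original = holder[methodName];
  holder[methodName] = function () {
    if (preCallback) preCallback.apply(this, arguments);
    var result = original.apply(this, arguments);  // call the original
    if (postCallback) postCallback.apply(this, arguments);
    return result;
  };
}
```

For a global function, the holder would be window (or globalThis), e.g. intercept(window, 'OnresultsClicked', null, sendScreenChanged).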

In the following JavaScript snippet, we invoke the logMethod function in order to intercept calls to a global function called OnresultsClicked. Each time OnresultsClicked is called, the post-callback will be invoked, sending the "screen_changed" event to the server:

    Metacog.Logger.logMethod("OnresultsClicked", function () {
        Metacog.send({
            event: "screen_changed",
            data: {screen: 'results'},
            type: Metacog.Logger.EVENT_TYPE.UI
        });
    });

Interception not only works for global functions. It also works with methods inside global objects. There is also the possibility of using a pre callback. For further details about logMethod, check the logger documentation.

Dealing with preloaders

Method interception on global objects works well as long as the global object exists. But sometimes learning objects implement a preloading mechanism, causing the declaration and initialization of the target objects to happen after the instrumentation.js script is executed.

The logger also offers a helper method for this situation. It works by describing the object whose existence must be monitored, and passing a callback that will be invoked as soon as the object appears:

    function () {
        Metacog.Logger.logMethod("event1", ..);
        Metacog.Logger.logMethod("event2", ..);
        Metacog.Logger.logMethod("event3", ..);
    }

Notice that the call to Metacog.Logger.start(); can also be included within the callback.
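The watch-for-object idea can be sketched as below. This is a hypothetical illustration (the logger ships its own helper for this; check its documentation): a deterministic poll() is exposed instead of a timer so the behavior is easy to follow, whereas in practice the poll would be driven by setInterval:

```javascript
// Hypothetical sketch of the watch-for-object idea (the logger provides
// its own helper; see its documentation). poll() is deterministic here;
// in practice it would be called from a setInterval until it fires.
function watchGlobal(scope, name, onReady) {
  var fired = false;
  return {
    poll: function () {
      if (!fired && scope[name] !== undefined) {
        fired = true;
        onReady(scope[name]);    // the target object finally exists
      }
      return fired;
    }
  };
}
```

A typical use would be watchGlobal(window, 'Game', instrumentGame), polled on a short interval until it returns true, at which point the interceptions can be installed safely.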


Each learning object to instrument may present particular conditions that make the instrumentation process easier or harder. The helper methods in the logger class are designed to assist the developer in overcoming some of these conditions while keeping the separation between the instrumentation and the original learning object code as clean as possible.

Make sure to check the logger documentation and the developer forums to get more ideas about how to plan your next instrumentation.