Tutorial: Rubric Design and Creation

Overview

This tutorial presents the Rubric API and explores the role of the Rubric object in the manual scoring of training sessions. It uses the "Chernoff Face" widget from the instrumentation tutorial as a working example.

Target Audience

Learning Experts who want to understand how to train the system by manually scoring selected sessions, and developers who need to register the required Rubrics in the Metacog Platform.

What are Rubrics and what are they used for?

In previous tutorials we saw how to instrument a widget in order to log custom events to the Metacog Platform, how to produce aggregated data files to feed reports, and how to embed the widget in the context of a Learning Task.

Being able to record the detailed actions of hundreds of learners in real time and then retrieve their data is useful, but it is only the first step of what Metacog can do for you.

The next step is to define scoring rules (the rubrics) and provide a set of manually scored sessions (the training sessions) so the Metacog Platform can learn from them and start to score learner sessions in real time.

Rubric Design

A Scoring Rubric is an attempt to communicate expectations of quality around a task. In many cases, scoring rubrics are used to delineate consistent criteria for grading [ref].

In addition to the basic attributes (id, name, description), Metacog Rubrics are structured around the concept of dimensions (criteria). A dimension is an aspect of the learner's performance that needs to be evaluated.

A Rubric may have as many dimensions as needed, so designing a Rubric is essentially a matter of designing each of its dimensions.

This is the structure of a Dimension:

  • Name
  • Description
  • Type: Defines the kind of values accepted by this dimension. It may be "ordinal" (a list where order matters), "categorical" (an unordered list) or "numerical" (a min-max range).
  • Params: Defines the options or the numeric range of valid values, depending on the Type: both "ordinal" and "categorical" expect an array of strings named "items", while "numerical" expects "min" and "max" attributes.
  • Indicators: An optional list of tags to be applied at arbitrary timestamps in the event stream. You can think of them as a way to annotate the data stream, marking intervals of interest.

The following details each type and its valid parameters:

Type          Description                            Parameters
numerical     a numeric value within a given range   min: number, max: number
categorical   unordered list of categories           items: [category list]
ordinal       ordered list of values                 items: [value list]
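
To make these rules concrete, here is a minimal validation sketch in Python. It is not part of the Metacog client library; the dimension dictionary simply mirrors the JSON structure described above:

def is_valid_value(dimension, value):
    """Return True if 'value' is acceptable for the given dimension (sketch)."""
    params = dimension["params"]
    if dimension["type"] == "numerical":
        # A numeric value within the inclusive [min, max] range.
        return params["min"] <= value <= params["max"]
    # "ordinal" and "categorical" both accept any item from "items";
    # for "ordinal", the position of the item also carries meaning.
    return value in params["items"]

# Example with the kind of dimension defined later in this tutorial:
performance = {"type": "ordinal",
               "params": {"items": ["insufficient", "partial", "complete"]}}
assert is_valid_value(performance, "partial")
assert not is_valid_value(performance, "excellent")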

The first dimension will be used as the top score of the Learner Session, while the other dimensions will provide additional information.

For the "Emotion Challenge" example, we are going to define two Dimensions, as follows:


Dimension 1: Task Performance
  Description: Indicates if the task's goal was achieved
  Type:        ordinal
  Parameters:  items: insufficient, partial, complete
  Indicators:  ans1, ans2, ans3, ans4 (we want to mark at which moment the learner solved each emotion)

Dimension 2: Expressiveness
  Description: Indicates whether the Learner uses multiple facial indicators to reinforce emotion representation
  Type:        categorical
  Parameters:  items: poor (fewer than three features), rich (more than three features)
  Indicators:  (none)

Using the Rubric API

Once there is a Rubric Design, it should be uploaded to the Metacog Platform. This step involves describing the Rubric as a JSON object and invoking the appropriate API endpoint from any HTTP client, such as cURL, Postman, the Metacog Sandbox or the Metacog Client Library API module.

Preparing the data

This is how the Rubric looks once it has been represented in JSON format:

{
  "name": "emotion_challenge_rubric",
  "description": "default rubric for testing",
  "dimensions": [
    {
      "name": "Task Performance",
      "description": "Indicates if the task's goal was achieved",
      "type": "ordinal",
      "params": {
        "items": ["insufficient", "partial", "complete"]
      },
      "indicators": ["ans1", "ans2", "ans3", "ans4"]
    },
    {
      "name": "Expressiveness",
      "description": "Indicates whether the Learner uses multiple facial indicators to reinforce emotion representation",
      "type": "categorical",
      "params": {
        "items": ["poor", "rich"]
      },
      "indicators": []
    }
  ]
}

Invoking the API endpoint

The API endpoint used for the creation of new rubrics is POST api.metacog.com/Rubric. We are going to use the Metacog API Sandbox to make the call, passing the application_id and publisher_id used in all the examples, along with the authentication token created in the instrumentation tutorial: 52edbedaaffb5bd012bf71e56acdd5664296e498ca80f607ab25379c01e2089c

Note: the authentication token provides an isolated environment for grouping Training Sessions and Scored Training Sessions.
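
If you prefer to make the call from code rather than from the Sandbox, the sketch below shows an equivalent request using Python's requests library. The header names used to pass the credentials are an assumption for illustration purposes; check the Metacog API reference for the authoritative request format.

import json
import requests

# Credentials from the previous tutorials. Passing them as headers is an
# assumption here; consult the Metacog API reference for the exact format.
headers = {
    "application_id": "YOUR_APPLICATION_ID",  # placeholder
    "publisher_id": "YOUR_PUBLISHER_ID",      # placeholder
    "Authorization": "52edbedaaffb5bd012bf71e56acdd5664296e498ca80f607ab25379c01e2089c",
    "Content-Type": "application/json",
}

# The rubric JSON shown above, saved to a local file.
with open("emotion_challenge_rubric.json") as f:
    rubric = json.load(f)

response = requests.post("https://api.metacog.com/Rubric",
                         headers=headers, data=json.dumps(rubric))
response.raise_for_status()

# The response echoes the rubric with additional fields, including its id
# (the exact field name is assumed to be "id").
print(response.json()["id"])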

The output of the API call, as shown by the Metacog API live documentation service, is a JSON object similar to the one we sent, but this time with some additional fields, the most important of which is the Rubric id:

bbbcd928-4200-4256-ae0a-81978de7f2bb

Keeping this identifier at hand will allow us to retrieve the Rubric for future operations (although it is not needed if you expect to use only the Training Toolbar: its UI includes search-by-name functionality over all the rubrics created for the same publisher).
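
As a sketch of such a future operation: assuming the API follows the usual REST convention of addressing a single resource as GET api.metacog.com/Rubric/{id} (this endpoint shape is an assumption, not something this tutorial confirms), retrieving the Rubric could look like this:

import requests

RUBRIC_ID = "bbbcd928-4200-4256-ae0a-81978de7f2bb"

# Endpoint shape assumed from REST conventions; reuses the credential
# headers dict defined in the creation example above.
response = requests.get("https://api.metacog.com/Rubric/" + RUBRIC_ID,
                        headers=headers)
response.raise_for_status()
print(response.json())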

Summary

In this tutorial we covered the concept of Rubrics, both from the point of view of their design and in the technical details of their creation in the Metacog Platform. In the next tutorial, we will see how Rubrics are used in combination with the Training Toolbar to create Training Sessions.