Learning Experience Data

We’ve had a few thoughts about the relationship between a learning experience and the data that can be generated during it.

What are the relationships between learning experiences and data?

We’re going to walk through iterative aspects of learning experience design, research, and evaluation to understand relevant uses of the data that participants and mechanisms can generate during any learning experience.

Before we dive in, let’s review a few foundations to remember about learning, assessment, design, evaluation, and research.

The general design production principle: fast, cheap, and good – pick two.  If you want good work done quickly, it’s going to be expensive.

Learning is purposeful communication between people, information, and machines.  Learning experiences can happen over different durations of time.

Learning itself cannot be measured, and it really can’t be assessed either.  What can be evaluated is the learning experience, based on relevant, valid assessment decisions.  Those decisions are informed by observable evidence (data) about patterns of activity that may demonstrate learning has occurred, collected using reliable measurement instruments.

Assessment is a machine that facilitates evaluation.

Research requires data collection throughout any learning experience, regardless of whether it’s you or someone else who ends up conducting research about that experience.

So, it’s good to have an assessment plan for each stakeholder, persona, or user type involved in the learning experience over time.  Figure 01 is an example diagram of what an assessment plan may look like.  Notice there are streams of data continuing on either side of the plan.  Consider the plan as a window of time, almost like placing a box frame into a river.

Figure 01. Assessment data collection plan example diagram
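
To make this concrete, here’s a minimal sketch (in Python) of an assessment plan as a window of time placed over a stakeholder’s ongoing data stream.  The field names are invented for illustration; use whatever fits your own documentation.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class AssessmentPlan:
        """A window of time placed over a stakeholder's ongoing data stream."""
        stakeholder: str                # persona or user type, e.g. "learner", "teacher"
        window_start: datetime          # where the box frame enters the river
        window_end: datetime            # where it leaves; the stream continues on either side
        evidence_sources: list = field(default_factory=list)  # instruments and observations to collect

    plan = AssessmentPlan(
        stakeholder="learner",
        window_start=datetime(2024, 1, 8),
        window_end=datetime(2024, 3, 29),
        evidence_sources=["quiz responses", "discussion posts", "project artifacts"],
    )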

It doesn’t really matter what the plan looks like, as long as you document the plan.

Along the same lines, it’s good to have a research data collection plan.  It’s also a good idea to create a learning experience evaluation plan.  What data do you need to collect to understand whether the learning experience you designed is actually working the way it should?

We need to collect data for assessment, research, and evaluation.  Three kinds of data, or at least three uses.  Whenever possible, figure out ways to use the same data collected about a stakeholder to serve at least two, if not three, of these purposes.
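
One lightweight way to get that double or triple duty is to tag each collected record with every purpose it can serve, then filter the shared pool per purpose.  A minimal Python sketch, with invented field names:

    # A single observed activity, tagged for every purpose it can serve.
    record = {
        "stakeholder": "learner",
        "activity": "submitted draft essay",
        "timestamp": "2024-02-14T10:32:00Z",
        "uses": {"assessment", "research", "evaluation"},  # one collection, three uses
    }

    def records_for(use, records):
        """Filter a shared pool of records down to a single purpose."""
        return [r for r in records if use in r["uses"]]

    assessment_data = records_for("assessment", [record])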

Which types of learning experiences does this data strategy apply to?  In short: any possible learning experience, regardless of audience, setting, or outcome.

If you want to measure, assess, evaluate, and research learning experiences, you’ve got to collect data.

Most likely you’re going to collect a lot of data.

Learning Activity Data Streams

Anything a learner does before, during, and after a learning experience is an opportunity for data collection, and we can orient these data into streams of demonstrated activities.  These activities can vary in shape, size, duration, and importance, but the one thing they have in common is their arrangement in sequence over time.  Figure 02 shows an example of one way to visualize these data streams.

Figure 02. Example graph of learner activity data streams

In this graph, time flows from left to right, and the time range within view is scalable based on the current pattern of activities under observation, with the data streams continuing forward and backward in time. This could represent several minutes or several days of activities.
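
Under the hood, such a stream can be as simple as a chronologically sorted list of timestamped events, with a windowing function to scale the visible time range.  A sketch, with invented event shapes:

    from datetime import datetime, timedelta

    # One learner's stream: (timestamp, activity) pairs of any shape or size.
    stream = sorted([
        (datetime(2024, 2, 14, 10, 5), "opened reading"),
        (datetime(2024, 2, 14, 10, 32), "answered practice question"),
        (datetime(2024, 2, 14, 11, 1), "posted to discussion"),
    ])

    def window(stream, start, length):
        """Scale the visible time range to the pattern under observation."""
        end = start + length
        return [(t, a) for t, a in stream if start <= t < end]

    # The same stream viewed at minutes or days of resolution.
    last_hour = window(stream, datetime(2024, 2, 14, 10, 0), timedelta(hours=1))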

Here, data uses are arranged vertically, with assessment above and research below the midline. In many cases, these data could be the same, simply mirrored on either side of the line to reflect a double use of those data.

This graph represents one learner’s activities, and it may not even be a complete view of all data collected about the activity stream during this time frame under observation. Also, this stream view doesn’t just apply to learners. Consider the demonstrated actions of other stakeholders as well: teachers, coaches, assessors, parents, managers, administrators, etc.

Everyone engaged in any learning experience (for any length of time, however short) is demonstrating a performance relevant to a complete understanding of that learning experience.

Grouping Synchronous Activity Data Streams

During any learning experience, regardless of duration, any number of people will be involved.  They may or may not be interacting directly with one another, but they are demonstrating actions at the same points in time.  For example, outside the classroom, students and teachers may be simultaneously working on different aspects of the learning experience, such as homework and grading.  Figure 03 shows one way to visualize these synchronous activity streams.

Figure 03. Visualization of synchronous activity data within some portion of a learning experience.
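
One way to line up synchronous streams is to bucket every actor’s events into shared time windows, so that simultaneous actions (a student doing homework while a teacher grades, say) appear side by side.  A Python sketch with hypothetical actors and an hourly bucket size:

    from collections import defaultdict
    from datetime import datetime

    events = [
        ("student", datetime(2024, 2, 14, 19, 10), "worked a homework problem"),
        ("teacher", datetime(2024, 2, 14, 19, 12), "graded a quiz"),
        ("student", datetime(2024, 2, 14, 20, 3), "revised an essay"),
    ]

    def group_by_hour(events):
        """Bucket every actor's activity into shared hourly windows."""
        buckets = defaultdict(list)
        for actor, t, activity in events:
            buckets[t.replace(minute=0, second=0, microsecond=0)].append((actor, activity))
        return buckets

    for hour, actions in sorted(group_by_hour(events).items()):
        print(hour, actions)  # actors aligned at the same points in time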

Such a visualization is just a suggestion.  Depending on the learning experience, these synchronous activity data streams should be visualized in a way that facilitates assessment, research, and evaluation, which may even require three different forms of visualization.

These groupings could also represent multiple aspects of one person’s activities.  For example, how many different variables are (or should be) involved in any research study conducted about the learning experience?  The X and Y axes of this “arena” of space for activity visualization are flexible, able to include any number of variables or values.

The key here is that we can align the performances of any number of actors over time, allowing for better assessment decisions and evaluation judgements, as well as contextualized research that may be easier to communicate to a broader audience.

Expanded Perspectives: People, Machines, and Information

To complete the picture of assessment, research, and evaluation, it is important to observe the behaviors of any machines and information that are involved in the function of the learning experience.  To observe these behaviors, we must collect data about the activities of machines and information just as we are collecting data about people.  Figure 04 shows one way to stack these arenas of activity data streams.

Figure 04. Stacked arenas for simultaneous activity data visualization of people, machines, and information.
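
In code, stacking these arenas can amount to keeping one synchronous event stream per layer.  A minimal sketch, with illustrative layer names and event shape:

    from datetime import datetime

    # One "arena" of synchronous activity streams per layer of the experience.
    arenas = {
        "people":      [],  # learners, teachers, assessors, managers, ...
        "machines":    [],  # physical devices and virtual machines (e.g. assessment bots)
        "information": [],  # content, records, and messages moving through the system
    }

    def observe(layer, timestamp, actor, activity, uses):
        """Collect machine and information activity just as we collect people activity.
        `uses` is the set of purposes the event serves, e.g. {"assessment", "research"}."""
        arenas[layer].append((timestamp, actor, activity, uses))

    observe("machines", datetime(2024, 2, 14, 19, 12), "assessment-bot-1",
            "scored a constructed response", {"assessment", "research"})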

It’s important to remember that many machines involved in any learning experience are virtual machines.  One example of a virtual machine is the collection of algorithms that may be employed as “assessment bots” in service of human assessors.

These arenas can and should be visualized in two or three dimensions, in whatever observational arrangement best serves the needs of assessment, research, and evaluation.  There’s no reason these modular arena stacks can’t be laid out in chronological sequence, like a row of bricks, if that is relevant to the form of observation.

For your convenience, Figures 04A, B, and C show each layer of activity data streams individually: people, machines, and information.

Figure 04A: People Activity Data Streams (Isolated)
Figure 04B: Machines Activity Data Streams (Isolated)
Figure 04C: Information Activity Data Streams (Isolated)

These arrangements of people, machines, and information activities can also be split (Figure 05) and sliced (Figure 06) in any number of ways relevant to the assessment, research, or evaluation task currently pursued.

Figure 05. Split view of stacked arenas.
Figure 06. Stacked slices of synchronous activity data streams for people, machines, and information.

Notice the data streams in Figure 06 are arranged vertically as assessment and research data streams as they were in Figure 02.  This gives one the opportunity to compare research and assessment performance across people, machines, and information within synchronous time windows.
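
Building on the hypothetical arenas sketch above, that comparison can be a matter of slicing every layer’s stream by the same time range, then splitting each slice into mirrored assessment and research streams:

    # Assumes the arenas/observe sketch above: each event is
    # (timestamp, actor, activity, uses).
    def slice_window(arenas, start, end):
        """Cut the same time window out of every layer's stream."""
        return {layer: [e for e in events if start <= e[0] < end]
                for layer, events in arenas.items()}

    def split_by_use(sliced):
        """Arrange each layer's slice into assessment and research streams (as in Figure 02)."""
        return {layer: {use: [e for e in events if use in e[3]]
                        for use in ("assessment", "research")}
                for layer, events in sliced.items()}

    # e.g. split_by_use(slice_window(arenas, window_start, window_end))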

Understanding Interoperability for Iterative Improvement

Whether or not there is an intention to conduct research on any learning experience, it is assumed that the design and evaluation of this experience are on some sort of iterative cycle.  In order to iteratively improve any learning experience, it is a good idea to understand and refine the interoperability of people, machines, and information that form the communication cycles of that learning experience.  Figure 07 shows one way to continuously evaluate this interoperability.

Figure 07. Pathways for evaluating interoperability of people, machines, and information within a learning experience.

There’s no reason to wait until the learning experience is “completed” by the learner for any evaluation (or research) process to begin.  This doesn’t imply that any functional or structural changes should be made to the learning experience as it is happening, but it’s good to observe the experience in action to generate formative notes for evaluation and research processes.

Here are some questions to consider:

  • Are the learning-oriented communication cycles optimized between people, machines, and information?
  • What’s working?
  • What could work better?
  • What’s broken, or might break under certain conditions?
  • Are the assessment, research, and evaluation plans unfolding as expected?

Essentially, you’re now observing the observed activity performances at a higher level, sometimes called a meta level.  As this formative research and evaluation work is happening, it can be good to think about which of these meta-level observations (of how the learning experience is consumed, produced, and delivered) are most relevant to improving the continued assessment, research, and evaluation of the learning experience as it is iterated and implemented again.