Learning Analytics Frameworks
Learning data and analytics provide value only when they yield actionable insights that can be used to improve the learning experience. Frameworks provide a structure for developing and implementing analytics to maximize their value. This article reviews six learning analytics frameworks.
Why a Framework?
For most of my twenty years in higher education I have been engaged in assessing and evaluating learning. This includes assessment of student learning that takes place in the classroom as well as measuring student satisfaction, retention, and success. I have also developed non-credit learning programs for a variety of corporate clients, who were likewise interested in how I could demonstrate the value of their investment. I also happen to have a diverse set of degrees in accounting, information technology, psychology, and education, plus a doctorate in sociotechnological planning. I approach this problem from a variety of perspectives and with comfort in both quantitative and qualitative data and analysis.
This is my first attempt at describing a set of different frameworks that can be used to structure thinking about learning analytics. I have been developing and teaching classes and programs longer than I have been evaluating them, and that experience as a creator informs my perspective. I am strongly motivated to use analytics to improve my own work in helping others learn.
Framework One: Streetlight Effect
The Streetlight Effect is the phenomenon, from the old joke, in which a drunk searches for his missing keys not where he lost them but under the streetlight, because that is where the light is.
In learning analytics, we see the streetlight effect often. We measure what is readily available to measure such as the number of participants or trainee evaluations completed during the learning activity.
We also see “vanity metrics.” These are metrics that we can brag about, like the number of participants in learning programs, but they have no alignment with larger organizational goals. Even in a college, enrollment is only the starting point. Student success is graduation and what graduates can do with their education.
The advantage of this framework is that it is easy to implement. However, it does not produce much actionable data.
Framework Two: OKR
OKR is a performance system developed at Intel by Andy Grove and his team. Since then it has become a favorite at Google and many other Silicon Valley companies. Objectives (O) define organizational, departmental, team, or individual goals. Key Results (KR) are the metrics for tracking progress and measuring success.
In the OKR approach, a company (or other organization) establishes objectives for a time period (typically a quarter, but it can be monthly). Each executive and each organizational subunit creates OKRs for the same period. The key is alignment with the top-level organizational goals. Each objective describes what that person or team will do to advance the top-level goals. These objectives should be a mix: some cascade down, while others come from the bottom up but remain in alignment with the larger goals.
Each person in the organization should have 3-5 Objectives with 3-5 Key Results for each objective. Where a Key Result defines quantity, there should also be one that measures quality, to guard against achieving quantity at the expense of quality.
The OKR approach is most effective as an organization-wide program, but it can be implemented at the departmental level or even by a single individual. In learning, OKRs provide a framework for learning initiatives. The objective might be a class or other initiative. The Key Results then measure the implementation and effectiveness of the objective. Even without larger institutional OKRs, OKRs can be aligned with the strategic plan or other sources to achieve a similar result.
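The structure described above can be sketched in code. This is a minimal illustration, not a prescribed implementation; the learning-department objective, its targets, and the progress formula (average of key-result completion) are my own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float
    current: float = 0.0

    def progress(self) -> float:
        # Fraction of the target achieved, capped at 100%
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    description: str
    key_results: list = field(default_factory=list)

    def progress(self) -> float:
        # An objective's progress here is the average of its key results
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

# Hypothetical learning-department objective for one quarter, pairing a
# quantity Key Result with a quality Key Result as recommended above
obj = Objective("Launch the new-manager training program")
obj.key_results.append(KeyResult("Managers trained", target=50, current=30))           # quantity
obj.key_results.append(KeyResult("Avg. satisfaction (1-5)", target=4.5, current=4.5))  # quality

print(f"Objective progress: {obj.progress():.0%}")  # → Objective progress: 80%
```

Note the pairing: hitting the quantity target alone would only count as half the story, since the quality Key Result is averaged in.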
Framework Three: Balanced Scorecard
The Balanced Scorecard is a more widely used organizational strategy framework than OKRs, but it follows some of the same structures. The genesis of the Balanced Scorecard was a reaction to the traditional focus on financial performance. Since it is hard to impact financial performance directly, the concept of the Balanced Scorecard was to provide a balance of metrics that included financial outcomes along with outcomes related to Customers, Internal Processes, and Organizational Capacity. The core concept is that non-financial metrics provide levers that lead to better financial outcomes.
The Balanced Scorecard also has an action-oriented approach beyond simply reporting and scorekeeping. Strategic Objectives identify how organizational strategy will be realized (much like the Objectives in OKR). Key Performance Indicators provide a measure of the effectiveness of Strategic Objectives, like Key Results in OKR. The key functional differentiator between these two approaches may be agility versus consistency. OKR is designed to be agile and expects that objectives and key results will change even within a quarter. The Balanced Scorecard, on the other hand, is expected to be long-lasting and to lead to dashboards that track KPIs over time.
The Balanced Scorecard provides a direct connection to learning through the Organizational Capacity domain. Essentially, Organizational Capacity is increased through learning. Metrics, then, should reflect the organizational capacity that is being targeted for increase.
Another approach is to develop and implement the Balanced Scorecard for the learning department. Financial objectives can be tied to budget and expenses. Customer objectives include objectives of how many people are to be trained and on what. Internal Processes can include objectives on improving efficiency, greater participant satisfaction, and processes related to developing learning materials. Organizational Capacity describes how the learning team is developing its own capacity by expanding into new content areas or learning new methods of instructional design.
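A learning-department scorecard along those lines can be sketched as a simple dashboard. Every objective, KPI, and target below is an invented example, not a recommended set; the point is the four-perspective structure and the KPI-versus-target check a dashboard would run.

```python
# Hypothetical Balanced Scorecard for a learning department:
# each perspective pairs a Strategic Objective with a KPI, a target, and an actual
scorecard = {
    "Financial": {
        "objective": "Deliver training within budget",
        "kpi": "Cost per learner ($)", "target": 250, "actual": 230,
    },
    "Customer": {
        "objective": "Train all new hires in compliance basics",
        "kpi": "New hires trained (%)", "target": 100, "actual": 92,
    },
    "Internal Processes": {
        "objective": "Shorten course development cycles",
        "kpi": "Weeks to develop a course", "target": 6, "actual": 8,
    },
    "Organizational Capacity": {
        "objective": "Expand team skills into video production",
        "kpi": "Designers certified", "target": 3, "actual": 3,
    },
}

for perspective, entry in scorecard.items():
    # For cost and cycle-time KPIs, lower is better; for the rest, higher is better
    lower_is_better = perspective in ("Financial", "Internal Processes")
    met = (entry["actual"] <= entry["target"]) if lower_is_better \
          else (entry["actual"] >= entry["target"])
    print(f"{perspective}: {entry['kpi']} = {entry['actual']} "
          f"(target {entry['target']}) -> {'on track' if met else 'needs attention'}")
```

Because the scorecard is meant to be long-lasting, the same structure would be refreshed with new actuals each reporting period rather than rewritten each quarter.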
Framework Four: Learning Outcomes
Developing learning outcomes is a key aspect of designing any learning experience. In higher education, establishing and assessing learning outcomes has been a core concern of accreditors for the last twenty-plus years. In corporate learning departments, learning outcomes go back further, to the development of formal instructional design processes. One of Stephen Covey’s famous habits is to begin with the end in mind, and that is what we do with learning outcomes.
Learning outcomes can be derived from research and needs assessments that define what learners need to know or what the gap is between current knowledge and desired knowledge. In professional fields, associations may establish a body of knowledge or learning outcomes for professionals. The Project Management Institute has done this for project management. In higher education, every professional field has an accreditor that specifies learning outcomes. Lominger and similar competency frameworks provide another approach to defining learning outcomes.
In higher education, we often use Bloom’s Taxonomy of Educational Objectives. Bloom defined three domains of learning: cognitive, affective, and psychomotor. Typically, academics focus on cognitive outcomes, which was the domain Bloom himself developed. Others have developed hierarchies for the affective (emotion) and psychomotor (action) domains. I add a fourth domain, social, to the mix. Taken together, these provide a model for identifying what outcomes are desired from the learning experience, whether those are knowledge-based, emotional like motivation or confidence, action-oriented like how to do something, or social, like how to work with others.
The purpose of outcomes is to guide learning design, but they also provide a model for assessing learner outcomes. Metrics that align with the outcomes provide a way of evaluating whether the learning program worked.
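The four-domain model can be made concrete as a mapping from each domain to an example outcome and a metric aligned with it. The outcomes and metrics below are illustrative examples of my own, not a canonical taxonomy.

```python
# Illustrative mapping of the four learning domains to a sample outcome
# and a metric that could assess it; all entries are invented examples
outcome_metrics = {
    "cognitive":   ("Explain the revenue-recognition rules", "Quiz score >= 80%"),
    "affective":   ("Feel confident presenting to clients",  "Pre/post confidence survey gain"),
    "psychomotor": ("Build a pivot table in a spreadsheet",  "Hands-on task completed correctly"),
    "social":      ("Give constructive feedback to peers",   "Peer-review rubric rating"),
}

for domain, (outcome, metric) in outcome_metrics.items():
    print(f"{domain:11} | outcome: {outcome} | metric: {metric}")
```

The discipline the table enforces is the useful part: every stated outcome gets a metric, and any outcome without one is a sign the program cannot be evaluated.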
Framework Five: Process and Output Framework
Another approach to metrics is to use a process model. Fans of Senge’s Fifth Discipline will find that this fits with his teachings. Learning is a process that produces certain outputs. Metrics can evaluate either the process or outputs. Process metrics can include measures of efficiency, satisfaction, or cost. Output measures can include what learners learned and also how they use that learning.
In a process model, we usually have a logic model that describes how the process works. For example, I might believe that if faculty use active learning strategies in their teaching, then students will have better persistence in the course and better retention of material. I offer a course to teach faculty how to use active teaching methods. Metrics need to evaluate each linkage in the model. I need to measure whether teachers actually incorporate active teaching strategies and, if so, whether that leads to student persistence and retention of material.
One of the benefits of online learning is that it allows us to capture very granular data on learner behavior. This allows us to measure how changes in the learning experience lead to different learner behaviors. In one of my projects, we knew that students who completed an assignment at the end of the first week of their first college course were likely to finish the course. Our focus was how to boost success on the metric of completing that assignment. We were able to analyze all of the student behavior in the course to identify different patterns among students who did not succeed. We found that we had students who never engaged in the course, some who engaged late in the week and appeared to run out of time, and some who started strong but faded early. Each group needed a different intervention, which we could measure through our metrics. One finding was that students who did not click within the course at least 240 times were never successful. This gave us a measure of engagement and allowed us to exclude them from the evaluation of the course, since they never really interacted with the material.
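The segmentation described above can be sketched as a simple classifier over click data. This is a minimal sketch: the 240-click threshold comes from the project described, but the daily click records, the week split, and the segment labels are illustrative assumptions.

```python
# Hypothetical sketch of segmenting first-week learner behavior by clickstream.
# Each record is a list of daily click counts for the seven days of week one.
ENGAGEMENT_THRESHOLD = 240  # below this total, students in the project never succeeded

def segment(daily_clicks: list) -> str:
    total = sum(daily_clicks)
    if total == 0:
        return "never engaged"
    if total < ENGAGEMENT_THRESHOLD:
        # Distinguish students who started strong and faded from late starters
        first_half, second_half = sum(daily_clicks[:4]), sum(daily_clicks[4:])
        return "started strong, faded" if first_half > second_half else "engaged late"
    return "engaged"

students = {
    "A": [0, 0, 0, 0, 0, 0, 0],        # never opened the course
    "B": [0, 0, 0, 0, 10, 40, 60],     # started late, ran out of time
    "C": [80, 50, 20, 5, 0, 0, 0],     # started strong, faded
    "D": [60, 50, 40, 40, 30, 30, 20], # steadily engaged, above threshold
}
for name, clicks in students.items():
    print(name, "->", segment(clicks))
```

Each segment then maps to a different intervention (outreach before the course starts, earlier deadline reminders, mid-week check-ins), and the "never engaged" group can be excluded from course evaluation as described above.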
In education and other nonprofit areas, this approach is known as program evaluation. It is a flexible approach that can be integrated into other frameworks.
Framework Six: Learning as a Product
For an entrepreneur, a primary activity is developing products for which someone will pay money. Learning departments are often in the enviable position of having not just a near monopoly on training but also a mandate that employees must complete prescribed training.
What if, though, corporate learning and development developed products like an entrepreneur? When learners are customers who decide to invest money to purchase learning and time to yield the benefits of this training, the primary concern is how this learning meets a need of the customer/learner.
Jobs to be Done (JTBD) is one model product managers use to assess what it is that customers need. For training, the job might be to become more efficient at one's work or to develop skills to advance. In my own work with faculty, I have seen the job be social: connecting with peers, since teaching is often a solitary profession. Any professional development that allows faculty to talk to each other about teaching will earn high marks.
In this framework, metrics are needed to evaluate whether the learning experience matches the goals of the learners. Whether the outcomes don't align with the learner goals, or the outcomes align but the learning fails to deliver on them, the learning is not a success. Knowing where it is failing helps in pivoting the product toward greater success.
Which framework is best?
I have ten children. I cannot tell you which is the best of all time, but each has their moments. The same goes for the frameworks listed above. Even streetlight and vanity metrics can be valuable if you are aware of their inherent limitations. The strongest approaches are composites that integrate elements from more than one framework.
The framework is also not the destination. It provides a way to think about metrics. Identifying learning metrics, collecting the data, analyzing it, and using the results each present their own challenges. Frequently, in the face of these challenges, the default decision is streetlight metrics, which rarely align with useful information on the success of the learning program or provide any hints about how to improve it.