Readiness Assessments: Measuring and Narrowing Gaps

When you endeavor to know what you don’t know, we call that a readiness assessment: a planning tool that helps identify gaps in existing strategies and processes. Readiness assessments accomplish three goals:

  1. They provide guidance on what you don’t know about learning analytics.
  2. They aid in building a learning measurement strategy.
  3. They document gaps in the current “as is” state versus the future “should be” state.
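
The third goal is where the assessment produces its most tangible output. One way to make it concrete is to rate each item in its “as is” and “should be” states and report the difference as the gap. A minimal sketch in Python, assuming an illustrative 1-to-5 maturity scale and made-up items:

```python
# Sketch of gap scoring for a readiness assessment.
# The 1-5 maturity scale and the sample items are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AssessmentItem:
    component: str   # stakeholders, strategy, process or technology
    question: str
    as_is: int       # current-state maturity, 1 (ad hoc) to 5 (optimized)
    should_be: int   # target-state maturity on the same scale

    @property
    def gap(self) -> int:
        return self.should_be - self.as_is

items = [
    AssessmentItem("strategy", "Strategy focuses on impact and ROI", 2, 4),
    AssessmentItem("technology", "Central database for storage", 1, 4),
]

# Largest gaps first: these become the improvement priorities.
for item in sorted(items, key=lambda i: i.gap, reverse=True):
    print(f"{item.component:12} gap={item.gap}  {item.question}")
```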

The readiness assessment is administered through a focus group or interview, or completed independently via a survey. Interviews are optimal because the interviewer can use the one-on-one interaction to drill down into specific components and ask for details and examples. The goal of the interview is to thoroughly understand the “as is” state.

There are four main components to a learning measurement readiness assessment: stakeholders, strategy, process and technology.

Stakeholders
Stakeholders represent the consumers of learning metrics. These might include:

  • Instructors
  • Instructional designers
  • Organizational development managers
  • Learning and development vice president
  • Chief learning officer
  • Line of business manager
  • Executive management

Understanding stakeholder needs is a way to identify the types of metrics that are of interest to these consumers. Some key questions to ask the learning and development organization with respect to this audience include:

  1. Do you know who the key stakeholders for learning metrics are?
  2. What specific metrics are these stakeholders looking for?
  3. Are there specific reports available to meet their needs?
  4. How satisfied are these stakeholders with the timeliness and usefulness of learning metrics today?

Further, the assessment should address the balance of metrics, as different stakeholders have diverse needs. The assessment should look at the following four broad areas to assess stakeholder needs (a quick balance check is sketched after the list):

  • Operational data: Is there a need for activity data (e.g., number of students trained)?
  • Performance data: Is there a need to show how well you train as opposed to how much you train (e.g., satisfaction scores or linkage to business results)?
  • Financial data: Is there a need to show fiscal stewardship (e.g., learning and development budget versus actual, or learning and development budget as a percent of payroll)?
  • Cultural data: Is there a need to cultivate an environment that supports learning and finds it strategic (e.g., the number of external awards or internal recognition points learning and development has received)?
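
If each key performance indicator is tagged with one of these four areas, checking the balance becomes mechanical. A small sketch, with the KPI names as examples only:

```python
# Sketch: tag each KPI with one of the four areas, then check coverage.
# KPI names are examples only.
from collections import Counter

kpis = {
    "students_trained": "operational",
    "satisfaction_score": "performance",
    "budget_vs_actual": "financial",
}

coverage = Counter(kpis.values())
areas = {"operational", "performance", "financial", "cultural"}
missing = areas - set(coverage)
print("Missing areas:", ", ".join(sorted(missing)) or "none")  # -> cultural
```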

Finally, the assessment should ensure the metrics stakeholders desire will be presented in a way that leads to timely and useful data for decision making. This is known as actionable data, and there are four ways to achieve this that should be investigated during the assessment:

  • Benchmarks: Do stakeholders value internal and external benchmarks to use as points of reference in interpreting their own data?
  • Goals: Can learning and development establish challenging yet attainable goals to show actual performance against stated goals?
  • Trends: Can learning and development measure consistently and repeatedly to derive trends so stakeholders know whether the metrics are moving in the right direction?
  • Color coding: Do existing stakeholder metrics present information in a manner that is easy to read with a red-yellow-green analysis to quickly pinpoint opportunities?
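
The color-coding idea in particular is easy to make concrete. A minimal sketch, assuming illustrative thresholds of 90 and 100 percent of goal; real thresholds should be agreed on per metric:

```python
# Illustrative red-yellow-green status for a metric against its goal.
# The 90/100 percent thresholds are assumptions, not a standard.
def rag_status(actual: float, goal: float) -> str:
    """Return 'green', 'yellow' or 'red' for actual performance versus goal."""
    ratio = actual / goal
    if ratio >= 1.0:
        return "green"   # goal met or exceeded
    if ratio >= 0.9:
        return "yellow"  # within 10 percent of goal: watch closely
    return "red"         # a clear opportunity to pinpoint

print(rag_status(actual=4.3, goal=4.5))  # satisfaction score -> 'yellow'
```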

Stakeholder analysis for learning metrics is important because stakeholders are the ones who use the information; success is more certain when measurement is aligned with their desired outcomes.

Strategy
Strategy is the development of the approach to learning metrics. It is reviewed in a learning measurement readiness assessment to understand the methodology and functional design of the learning analytics process.

Some key questions to ask during this phase of the assessment include:

  1. Do you have a strategy? Is it practical to implement?
  2. Does the strategy focus on impact and ROI?
  3. Does the strategy consider benchmarking?
  4. Does the strategy consider standards and consistencies?
  5. Does the strategy provide meaningful information in a timely manner?

It is challenging to build an analytics policy or process without an overarching strategy. Hence the importance of this section: if key decisions on what, how and when to measure are not made, there is a risk that learning will be measured inconsistently and will not be comparable across the organization. Further, effort is likely to be duplicated, additional cost is likely to be incurred and the delivery of metrics is likely to be delayed.

Because of these risks, the organization must have business rules or standards for its learning measurement strategy. Some core rules that ought to be included in a measurement strategy might be (a sketch of such rules as an explicit configuration follows the list):

  • The key performance indicators for the learning and development organization.
  • The measurement protocols or standards (especially for evaluation).
  • The linkage to business strategy.
  • The linkage to credible measurement methodology.
  • A process to execute the strategy.
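
Writing the rules down explicitly keeps the strategy from drifting. A sketch of such a configuration, with hypothetical KPI names, protocol and cadence (Kirkpatrick levels are named here only as one common, credible methodology):

```python
# A measurement strategy captured as explicit business rules.
# Every value here is illustrative and would come out of the assessment.
measurement_strategy = {
    "kpis": ["students_trained", "satisfaction_score", "budget_vs_actual"],
    "evaluation_protocol": "standard end-of-course survey, 5-point scale",
    "business_linkage": "each program maps to a stated business goal",
    "methodology": "Kirkpatrick levels 1-3 for all programs",
    "reporting_cadence": "monthly scorecard to stakeholders",
}
```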

Process
Process represents development of learning measurement’s inputs, activities and outputs. This is reviewed in a readiness assessment to understand the physical and financial resources involved in producing the metrics. The assessment should ensure the process is practical and repeatable, given existing financial, physical and human resources. The assessment also should ensure the process, as stated, is functioning as designed.

Some key questions to ask during this stage of the assessment include:

  1. Are key performance indicators monitored regularly?
  2. Do you evaluate 100 percent of learning events?
  3. Is there a formal budget for learning measurement?
  4. Are there dedicated resources for learning measurement?
  5. What percent of time is administrative versus value-added?

The key areas of concern, the ones to drill into more deeply with the interviewee, are the following core elements of measurement (an end-to-end sketch follows the list):

  • Data collection: Look at the key performance indicators and ensure a balance exists (operational, financial, cultural, performance). Study the evaluation instruments and ensure they consistently collect the right data. Understand the volume of learning to ensure collection is scalable. Review technology tools used in the collection process for efficiency.
  • Data storage: Review the database(s) and look for a central database or a way to consolidate data into a data warehouse. Understand the security surrounding the database(s). Determine how easy it is to extract data out of the database(s).
  • Data processing: Determine whether there are self-sufficient query capabilities. Review how information is aggregated and filtered.
  • Data reporting: Review the standard reports given to stakeholders. Study reporting functionality and features such as benchmarking, trend lines, statistical sophistication, drill-down capabilities and executive summary capability.
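
To see how the four elements chain together, here is a deliberately tiny end-to-end sketch, with in-memory lists standing in for a real survey tool and central database; the course names and fields are illustrative:

```python
# Collect -> store -> process -> report, in miniature.
from statistics import mean

# Data collection: one record per completed evaluation.
evaluations = [
    {"course": "Onboarding", "satisfaction": 4.5},
    {"course": "Onboarding", "satisfaction": 4.1},
    {"course": "Leadership", "satisfaction": 3.8},
]

# Data storage: consolidate into one central structure keyed by course.
store: dict[str, list[float]] = {}
for e in evaluations:
    store.setdefault(e["course"], []).append(e["satisfaction"])

# Data processing: aggregate (here, a simple mean per course).
summary = {course: mean(scores) for course, scores in store.items()}

# Data reporting: the standard report handed to stakeholders.
for course, avg in sorted(summary.items()):
    print(f"{course:12} avg satisfaction {avg:.2f} (n={len(store[course])})")
```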

The key is to recognize that these four activities are administrative and, therefore, not value-added. The resources allocated here should be minimized by having solid business rules, standards, templates and technology.

The goal is to get to reports quickly so users can use them for analysis and decision making. Hence, you need to study the learning measurement process carefully during a readiness assessment.

Technology
Last but not least is technology. It is studied to understand where manual tasks can be automated and to help streamline administration.

Some key questions to ask at this stage include:

  1. What percent of data is gathered online versus on paper?
  2. Is there a central database for storage?
  3. Is there an online analytical processing (OLAP) database for custom querying?
  4. Is it easy to drill down for tactical analysis?
  5. Are results available in real time through self-sufficient tools?

There are areas of overlap between the technology considerations and the process portion of the readiness assessment. This is perfectly acceptable. The process area looks at transactional flow, whereas technology looks at the mechanisms that produce the transactions within that flow and their degree of sophistication.

For example, use of a basic Internet survey tool can make a world of difference in data collection, as opposed to manual paper processing.

In addition, a SQL database might offer better central storage than multiple Excel worksheets. Further, OLAP tools for querying the database might make users self-sufficient rather than dependent on the IT department every time a query needs to be run.
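
As a small illustration of that difference, the following sketch uses Python’s built-in sqlite3 module as a stand-in for a central database; the table and column names are assumptions:

```python
# Central SQL storage plus a self-service query, instead of scattered
# worksheets. An in-memory database keeps the example self-contained;
# a shared database server plays this role in practice.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE evaluations (course TEXT, satisfaction REAL)")
conn.executemany(
    "INSERT INTO evaluations VALUES (?, ?)",
    [("Onboarding", 4.5), ("Onboarding", 4.1), ("Leadership", 3.8)],
)

# A stakeholder, not the IT department, can answer this question directly.
query = "SELECT course, AVG(satisfaction), COUNT(*) FROM evaluations GROUP BY course"
for course, avg, n in conn.execute(query):
    print(course, round(avg, 2), n)
```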

In today’s world, technology should be considered throughout the process. Technology is inexpensive and powerful, and it can minimize administration, shifting time toward analysis, communication and improvement.

Broader Human Capital and Talent Management Implications

Learning and development is increasingly asked to play a broader role in the talent management and human capital management area. How are these areas measured? What makes a complete set of processes for these areas? These are questions with which many organizations struggle.

To this end, a human capital process-classification scheme has been derived to help identify a complete set of processes that need measurement (the scheme is restated as a simple data structure after the list):

  • Manage deployment of personnel: Workforce requirements forecasting; recruiting, selection and hiring; succession planning; international assignment; mobile workforce; employee turnover.
  • Manage competencies and performance: Competency management, performance appraisal.
  • Develop and train employees: Onboarding, learning and development, coaching and mentoring, leadership development, knowledge management.
  • Motivate and retain employees: Compensation and benefits, employee satisfaction, employee engagement, work-life balance, workforce diversity.
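
Restated as a data structure, the scheme becomes an inventory that can be scored item by item during an assessment. A sketch, with the grouping keys as shorthand for the categories above:

```python
# The human capital process-classification scheme as a simple inventory.
# Each process would receive an "as is" / "should be" score in practice.
human_capital_processes = {
    "manage_deployment": [
        "workforce requirements forecasting", "recruiting and hiring",
        "succession planning", "international assignment",
        "mobile workforce", "employee turnover",
    ],
    "competencies_and_performance": [
        "competency management", "performance appraisal",
    ],
    "develop_and_train": [
        "onboarding", "learning and development", "coaching and mentoring",
        "leadership development", "knowledge management",
    ],
    "motivate_and_retain": [
        "compensation and benefits", "employee satisfaction",
        "employee engagement", "work-life balance", "workforce diversity",
    ],
}
```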

From the above scheme, you can identify all the areas of human capital process measurement in a complete manner. From there, you can use diagnostic tools to assess the sophistication of measurement within each area and determine where to focus efforts. Most of these processes need better management, and measurement is a start to getting a handle on them.

A learning measurement readiness assessment (and, in broader terms, a human capital or talent management readiness assessment) can help diagnose gaps in existing processes and pinpoint opportunities for improvement in a prioritized manner.

To perform the assessment, it might help to have someone who not only knows the domain (such as learning and development, human capital or talent management) but also is an experienced interviewer or focus group facilitator.

Finally, from a change management perspective, to avoid political agendas and ensure everyone provides objective responses, a third party might be the best option for leading these assessments.