A great deal of training centers on the procedures people should follow, and this makes sense because procedures are essential: they are the tried-and-true means of performing a task well. And by relying on procedures, instructors can determine whether a trainee is making progress and learning how to do the task.
But procedures, while necessary, are often insufficient. Sometimes there are no procedures for complex tasks, and even when we can establish a set of procedures, trainees need to acquire the tacit knowledge (perceptual skills, pattern repertoire, rich mental models) to handle tough cases. Instruction has to push beyond procedures to include cognitive skills that enable better decisions, more accurate sensemaking, more rapid problem detection, a greater ability to handle uncertainty and ambiguity, and better risk management.
How can instructors inject cognitive skills into their training programs?
I have developed a tool I call the “Cognitive Audit” that might be useful. It is a method for assessing which cognitive training requirements an instructor might want to address. It obviously builds on some important previous concepts: work on tacit knowledge (Polanyi, 1958; Klein, 2005), the description of macrocognition (Klein et al., 2003), the set of cognitive training outcomes described by Klein et al. (NDM paper), and the Knowledge Audit (Militello & Hutton, 1998; Klein & Militello, 2004).
But the Cognitive Audit goes beyond these precursors by incorporating what my colleagues and I have learned about cognitive skills training from our ShadowBox projects.
What differentiates the Cognitive Audit from its predecessors is that it includes only trainable cognitive skills. The other frameworks identify aspects of expertise, and these are very important, but many of them are not directly trainable. For example, the Cognitive Audit does not include decision making, which is not trainable as a general skill, and it does not include sensemaking, which likewise is not a general skill that can be trained; that is how the Cognitive Audit differs from the components of macrocognition. Further, the Cognitive Audit does not include gaining a sense of typicality, which enables someone to detect an anomaly; this sense is part of tacit knowledge, but it is not easily trainable. Finally, the Cognitive Audit does not include elements of the Knowledge Audit such as Past/Future, Big Picture, or Anomalies. Although these probes are useful in conducting a cognitive task analysis, they are not trainable cognitive skills.
Of course, cognitive training, using ShadowBox or other techniques, should help people improve decision making and sensemaking, and so forth, because trainees will gain richer mental models, better mindsets, and more expertise. It’s just that these outcomes cannot be directly trained as skills.
What’s left? Here are ten cognitive training requirements that may be relevant for a training program. They are the components of the Cognitive Audit. If you are seeking to “cognitize” an instructional program, you might start by determining which of these are particularly important for gaining expertise. These components do appear to be trainable.
Cognitive Audit: The cognitive training requirements in a domain
Mindsets. These are often single beliefs that affect the way a person understands and manages situations. For example, a training program might help a person shift from a Procedural mindset to a Problem Solving mindset. In law enforcement, you might want to design training scenarios that help people shift to a mindset of trying to build trust in each civilian encounter.
Boundary Conditions for Procedures. As I mentioned earlier in this essay, procedures are usually essential, but for cognitive training you may want to describe the boundary conditions for following procedures: the conditions under which someone might abandon or modify them.
Goal tradeoffs. It is fine to describe the relevant goals for a job, but the hard part is managing goal tradeoffs: there are usually several goals to juggle, and they may conflict with each other. Practice is needed in making these tradeoffs.
Job smarts. This component comes directly from the Knowledge Audit — the tricks of the trade that people need to learn in order to perform their work efficiently.
Workarounds. The ability to improvise depends on experience and resourcefulness and is not directly trainable, but what is trainable, I believe, is a mindset of being adaptable versus clinging to procedures. This requirement was part of the original Knowledge Audit. Many of the Tactical Decision Games published in the Marine Corps Gazette feature a requirement to adapt or replace a plan in order to work around an unforeseen barrier.
Managing uncertainty and ambiguity. This component is also about training a mindset shift. I do not think there is a general skill for managing uncertainty; the training has to do with shifting toward making decisions despite uncertainty rather than delaying in order to gain more clarity. The issue of uncertainty management appeared in the very earliest formulations of Naturalistic Decision Making (Klein et al., 1993).
Problem detection and diagnosis. Decision makers can benefit greatly from spotting weak signals and detecting problems at an early stage (Klein et al., 2005b). There is no reason to believe that problem detection is a general skill. Rather, the skill lies in the mindset of attending to weak signals and imagining their significance. Another part of problem detection is gaining the experience to anticipate what the weak signals might imply. The topic of problem detection was included in the concept of macrocognition.
Attention management. What to notice, what to ignore? With experience, people get better at allocating their attention, which is why it appears to be a trainable skill; it was mentioned in the Knowledge Audit and in the macrocognition diagram. One of the variants of ShadowBox gets directly at attention management skills.
Perceptual discrimination. Experts have learned to distinguish subtle cues that novices don’t pick up, as discussed by Klein & Hoffman (1993) in their essay Seeing the Invisible, and by Klein (1998) in Sources of Power.
Common Ground and Coordination/Predictability. Teams improve their coordination as they gain more experience working together because the team members learn to predict each other’s responses, and also because the team members achieve greater common ground (Klein et al., 2005a). Therefore predictability and common ground become important training requirements, as described in the macrocognition approach.
The Cognitive Audit overlaps with previous descriptions of macrocognitive skills. What sets it apart is that it emphasizes trainable aspects of cognitive performance. That is why the Cognitive Audit may be useful for appraising training requirements for a given activity and for guiding scenario development and training strategy. When working with existing scenarios, the Cognitive Audit might suggest ways to “cognitize” these scenarios — ways to modify the scenarios to tap into the components described in this essay.
Klein, G. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Klein, G. (2005). The power of intuition. New York, NY: A Currency Book/Doubleday. Hardcover edition entitled Intuition at Work (2004).
Klein, G., Feltovich, P. J., Bradshaw, J. M., & Woods, D. D. (2005a). Common ground and coordination in joint activity. In W. B. Rouse, & K. R. Boff (Eds.), Organizational Simulation. New York, NY: Wiley.
Klein, G. A., & Hoffman, R. R. (1993). Seeing the invisible: Perceptual/cognitive aspects of expertise. In M. Rabinowitz (Ed.), Cognitive science foundations of instruction (pp. 203-226). Mahwah, NJ: Lawrence Erlbaum Associates.
Klein, G., & Militello, L. (2004). The knowledge audit as a method for cognitive task analysis. In H. Montgomery, R. Lipshitz & B. Brehmer (Eds.), How professionals make decisions. Mahwah, NJ: Erlbaum.
Klein, G. A., Orasanu, J., Calderwood, R., & Zsambok, C. E. (Eds.) (1993). Decision making in action: Models and methods. Norwood, NJ: Ablex Publishing Corporation.
Klein, G., Pliske, R. M., Crandall, B., & Woods, D. (2005b). Problem detection. Cognition, Technology & Work, 7, 14-28.
Klein, G., Ross, K. G., Moon, B. M., Klein, D. E., Hoffman, R. R., & Hollnagel, E. (2003). Macrocognition. IEEE Intelligent Systems, 18(3), 81-85.
Militello, L. G., & Hutton, R. J. B. (1998). Applied cognitive task analysis (ACTA): A practitioner's toolkit for understanding cognitive task demands. Ergonomics, 41, 1618-1641.
Polanyi, M. (1958). Personal knowledge. Chicago, IL: University of Chicago Press.