A licensing exam can identify people qualified to use a term like “psychologist” or “social worker.” Or it can identify people with certain competencies associated with those terms.
In the case of statutes that merely restrict the use of a label, the current multiple-choice licensing exam in psychology purports to be face valid, but it is in fact a collection of items that appeal to the test creators. I learned all I needed to know about the licensing exam when I took it in 1979. An item I initially left blank asked something about which animal had contributed the most to psychological knowledge. The choices included the pigeon, the rat, and the rhesus monkey. When I finished the exam, I was looking over the test booklet, and I saw that Harry Harlow was one of the listed consultants on test construction. Skinner was not listed. So I went back to the item and checked the rhesus monkey. (Harlow did that wire monkey research.)
Another issue with the licensing exam is that it is simply a cognitive test. The applicants who are smartest at that kind of thing tend to do better on it, not because they have enhanced knowledge of psychology, but because they are good at taking tests. Timed crossword puzzles would achieve similar results, without all the rigmarole about validity.
Instead of a consensus among consultants on which multiple-choice items should be answered correctly to allow you to call yourself a psychologist, shouldn’t psychologists adopt an empirical approach? One idea would be to have psychologists rate each other locally and nationally on a simple measure of expertise. You wouldn’t rate everyone, just people whose work you know. This would enable the test constructors to identify a consensus pool of experts. Then, instead of asking them what should be on the test, they could be asked or paid to take tests themselves. Comparison groups might be recent college graduates, poorly rated psychologists, and psychologists who got kicked out of the profession. Items retained for the test should be those that recognized experts on average answer correctly at a significantly higher rate than the other groups.
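The item-retention rule described above amounts to a standard discrimination test between groups. A minimal sketch, assuming hypothetical item names and answer tallies, might use a one-sided two-proportion z-test to keep only items the expert pool answers correctly significantly more often than the pooled comparison groups:

```python
import math

def two_prop_z(correct_a, n_a, correct_b, n_b):
    """One-sided two-proportion z-test.
    Returns the p-value for H1: group A's correct rate > group B's."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # Survival function of the standard normal at z.
    return 0.5 * math.erfc(z / math.sqrt(2))

def retain_items(items, alpha=0.05):
    """Keep items where experts outperform the comparison groups
    at significance level alpha."""
    kept = []
    for name, (exp_correct, exp_n, comp_correct, comp_n) in items.items():
        if two_prop_z(exp_correct, exp_n, comp_correct, comp_n) < alpha:
            kept.append(name)
    return kept

# Hypothetical tallies: (experts correct, experts N, comparison correct, comparison N)
items = {
    "item_01": (92, 100, 55, 100),  # experts clearly better -> retain
    "item_02": (60, 100, 58, 100),  # no real difference -> drop
}
print(retain_items(items))  # -> ['item_01']
```

A real implementation would also correct for multiple comparisons across the item pool, but the principle is the one stated above: an item earns its place by discriminating experts from non-experts, not by appealing to consultants.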
That’s well and good for states that license the term “psychology.” But for states that license practice, you’d have to start by identifying the best clinicians. The difficulty of doing this encourages all those evidence-based and empirically-supported treatment purveyors. It feeds their implicit belief that enhanced training is as irrelevant to therapy as it is to, say, treating all the conditions that PAs and nurses now treat instead of doctors. One problem with identifying clinical experts is that there is simply no way to find out what people actually do clinically. I referred a friend to a well-known, highly respected therapist in a major city. The therapist tried to talk the patient into seeing her by badmouthing anyone who took insurance (this therapist did not), by calling her after the appointment to say how well they could work together, and by texting her with a similar message late that night. A large group of psychologists won’t recognize that this behavior means the therapist, despite her reputation, is incompetent. This is because there have been no empirical studies to show that these kinds of solicitations interfere with therapy as measured by outcome assessments like the Hamilton and Beck. (A shockingly large number of studies purporting to measure therapy outcomes use self-report inventories of this sort, ignoring whatever stake the patient might have in answering the questions one way or the other.) Anyone who recognizes that this well-known therapist is incompetent should have to admit that it is not a simple matter to figure out who is good at therapy and who is not.
Now our profession is about to enter a world of circular logic that will leave excellent clinicians on the sidelines. Licensing mavens are starting to argue that you should have to demonstrate your knowledge of evidence-based and empirically-supported treatments to pass the exam. These treatments reflect the research preferences of non-clinicians and their logical errors that treat depressions and anxieties like viruses and bacteria. In the not-too-distant future, you can count on “psychologist” and “psychiatrist” to mean that the bearer of the label lacks the faintest idea of how to help people by talking to them; it will mean that the person has learned to follow recipes. Everyone will be a sous chef.
Really, I’d prefer a test that simply tracks whether the person starts and ends sessions on time. These variables are fairly easy to track, and they speak volumes to clinical ability, mastery of the situation, and empathy for the patient’s needs, especially those needs the patient may not be conscious of. It’s a skill that, as you’d expect with a sign of expertise, many advanced therapists acquire and almost no beginning therapists possess. You could add to that a component where candidates demonstrate an ability to develop a theory-contextualized case formulation, and one where they demonstrate that they don’t overly value any particular source of data (trying to eliminate those who take as literally true everything that patients say about their families, those who take the results of psychological tests at face value, and those who think that patients’ behavior in the therapy office is a solid indicator of how they behave elsewhere).
I’d also have no quarrel with a licensing exam that measured a sense of humor, critical thinking, taking things in stride, understanding analogy, self-awareness (i.e., self-mockery), perspective shifting, and the amount of reading the person does in literature, history, and philosophy.