Artificial Intelligence
Who Should Use GenAI and for What?
When generative AI helps, when it harms, and why the difference matters.
Posted February 10, 2026 Reviewed by Monica Vilhauer Ph.D.
Key points
- Decisions about using generative AI should depend on who is using it and what task they are trying to accomplish.
- Using GenAI for performance tasks is generally lower risk than using it to learn something deeply.
- Errors from GenAI are especially harmful in early learning because misconceptions are hard to undo.
- Responsible GenAI use requires aligning technology choices with learning goals and expertise levels.
Post by Dr. Jeffrey A. Greene, University of North Carolina at Chapel Hill
Nowadays, when someone I’ve just met finds out I study how people learn with technology, they almost always ask: “Should people be using Generative AI (GenAI) or not?” That’s a tough question, with lots of dimensions to it*, but two key factors to answering it are: who is using GenAI and what they are doing with it. I created a graphic to illustrate these factors and provide some guidance.
The y-axis is a continuum from “novice to expert.” The research on expertise is clear: most people can be an expert in only one or two areas, so for most topics, most people fall somewhere between novice and expert. For example, I am an expert in educational psychology, competent at cooking, and very much a novice when it comes to international politics. The other continuum, the “learning to performance” axis, captures what kind of goal a person has. The recommended ways to use GenAI differ depending on whether my goal is to learn new knowledge or skills, or simply to perform a task I already know how to do well or just need to complete (e.g., updating my smartphone’s system software) without learning anything deeply.
Let’s use a few examples to explore this graphic. As an educational psychology professor, here’s a prompt I might input into a GenAI:
I'm teaching doctoral students about different learning theories (cognitive, social, sociocultural, and situated). Develop a lesson plan for a 2.5 hr class that will engage students but also ensure that by the end of class they will know how to define, compare, and contrast these four learning theories.
I’ve made many lesson plans in my life, so getting some help with this “expert/performance” task is a reasonable use of GenAI because I can (and must!) review the output and determine which parts are helpful. On the other hand, what if I were to ask GenAI to synthesize the latest empirical research on the effect of educational technology on reading? I’d call that an “expert/learning” task because the GenAI would be finding and summarizing research I haven’t seen yet. So before I would trust the GenAI’s output, I’d need to verify it: checking the references it provides, reading the articles’ abstracts to make sure the GenAI summarized the main points properly, and asking myself whether the summary aligned with what I know about educational technology and reading.
The next example is something a student might prompt:
I am a high school student. I'm having trouble keeping track of all my class assignment and test dates. What are some good strategies or tools for doing this?
I would put this prompt in the “novice/performance” quadrant, because the student just needs help organizing their schoolwork. I think it is reasonable to use GenAI for this kind of “low-risk” task, particularly if no reliable human sources are available. Of course, the student should track how useful the GenAI’s advice was, both in terms of whether it actually helps them with their schoolwork and whether reliable human sources agree with it.
Finally, many students use GenAI when they are struggling to understand content they need to learn for class, putting them in the “novice/learning” quadrant. Here’s a typical prompt:
I am studying thermodynamics and I really don't understand this idea that heat is not a substance. Can you explain to me how heat works?
Regardless of how well the GenAI answered the question, I think this kind of use is very risky. Despite recent improvements in accuracy, Generative AI models still make mistakes, and those errors are difficult for non-experts to spot. Learning false information is a problem because, as the research shows, misunderstandings formed early in learning can be difficult to change and can negatively affect future learning. In addition, there is the risk that students become dependent upon GenAI to produce answers for them, rather than using it as a tool to learn (i.e., using it as a shortcut versus cutting yourself short).
Therefore, in my figure, the “who” and “what” axes cross to provide a guide for deciding whether and how to use GenAI. An expert who is engaging in a performance task can confidently use GenAI because they know enough to spot hallucinations and they just need to get the work done. An expert can use Generative AI for learning, too, but any insights should be carefully reviewed and verified. Novices seeking help to perform a task can use GenAI, but I recommend doing so only when the consequences of getting and using bad information are low-risk. Finally, I would not recommend that novices use GenAI when they need to learn something deeply.
Thus, when people ask me if they should use GenAI, my answer starts with “What are you trying to do and how much do you know about it?”
*It is important to consider alternative, more critical views on Generative AI’s role in learning and education. Good examples of such views can be found here, here, and here.
Dr. Jeffrey A. Greene is a scholar, speaker, and consultant who helps people move from distraction to action by learning critically, inquiring humbly, and working with integrity. He is the McMichael Professor in the School of Education at the University of North Carolina at Chapel Hill.
