The Rule Unmaker
Philosopher Barry Lam endorses only one rule: We should have fewer rules.
By Devon Frye, published January 7, 2025; last reviewed January 16, 2025

As far back as he can remember, Barry Lam had a defiant disposition: Whenever anyone told him to do something, he felt a strong urge to do the opposite. As a teen, he blew off his studies and rebelled against his school’s administration; pushed by his immigrant family to study computer science or medicine, he decided to become “Mr. Humanities” and majored in philosophy instead. Now a professor at the University of California, Riverside, and the host of the podcast Hi-Phi Nation, Lam still finds himself challenging rules—only now, he’s questioning the idea that we need so many at all. In his new book Fewer Rules, Better People, he argues that the numerous rules that aim to improve our behavior often do the opposite—and that as arbitrary and biased as human decision-making can be, it still beats the alternative.
You’ve been called a public philosopher. What does that mean to you?
I try to draw philosophy out of things that are happening in the world. As public philosophers, we find issues of interest to people, we dig into them, and we find out what the philosophical questions are. I was a specialist for a long time; my earliest work was in epistemology, the theory of knowledge. But public philosophy can be anything; some of us study games, for instance, and lately I’ve been gravitating toward the philosophy of criminal justice.

Your new book argues that too many rules make us worse, not better, people. So why do we make so many rules?
I have to give rules their due: There are good reasons why they exist. One is our desire for fairness—that everyone be judged and treated the same. When someone passes judgment on you but gets it wrong, it feels like a great injustice, and we feel a strong normative pull to fix it. Take our criminal justice system: There really is a lot of bias there. If some judges are biased when deciding whether someone goes to jail pretrial or gets released, for example, creating a rule that automates that decision is a way to respond to injustice.
So what’s the darker side of all those rules?
Mistrust, going every which way. On the one hand, authorities often have a very bleak view of the people they have authority over. They see them as constantly trying to cheat the system, so they create complex rules to counter that. Then there's mistrust coming back from the people: These evil officials are always trying to screw me; I need the rules so they can't treat me unfairly. When we don't trust each other, we create more rules; then people find loopholes, so we create even more. The incentives only move in one direction, toward ever more complex rules. Things get increasingly byzantine until you can't rule on a specific situation without consulting a book that's five inches thick.
What other problems do excessive rules create?
When someone is only afraid of violating the rules, they don't exercise any agency. Not only is that a hollow existence, but often that inability to exercise judgment ends up going against the spirit of the rule. Take teaching credentials. The spirit of those rules is that we don't want unqualified people teaching children. Completely legitimate, right? But if the rules say that the application needs an original ink signature, and the reviewers can't quite tell whether yours is signed in ink, they say no. If they were allowed to use common sense, they might see that this person isn't going to do a disservice to children simply because there's no ink. But "rules are rules." Doctors, teachers, anyone trying to follow their company's reimbursement procedures has probably experienced this kind of thing. That's the world we've created for ourselves.
What if we just turned decision-making over to AI?
I suspect that human decision-making, when it's properly educated, will always be better than automated decision-making. But let's say, hypothetically, that the result is exactly the same. Does human judgment still have value? I argue the answer is yes. It's the same value as having a real person cook your dinner instead of eating industrialized food products. Part of what it is to be human is to be a creative actor, to intervene in the world with your hands and your mind. A world in which no human being ever passes judgment on you, only algorithms: we haven't yet realized just how hollow that is.
You don’t want us to ditch rules entirely. So what are you proposing?
I’m not a burn-down-the-system person—at least not anymore. All I’m suggesting is that we create more ways for people to exercise discretion, and if they do it well, let them exercise more. Legalism treats everyone as if they’re immoral. I’m not saying we should pretend everyone is all good, either; just that there’s a middle way.
Your podcast combines narrative journalism with philosophical questions. How do those approaches work together?
Economics was, for a long time, a boring high school subject that nobody was interested in. But economists have done a great job connecting economics to issues that you might not think were related, and as a society we’ve become used to looking at things through an economic lens. I’m trying to do the same with philosophy, and I feel that stories help people get a better grip on its abstract questions. Hearing directly from a man in love with his AI chatbot—so much so that he’s decided to be in an exclusive relationship with her—opens up so many questions: Do these relationships qualify as love? The people involved certainly think so, but for that episode, I asked philosophers who study the nature of love, too. Do companies that make these chatbots have a moral obligation not to delete them? It’s not sentient, and it’s not a living being—but it’s not just a piece of software, either. There are philosophical questions hidden under so many things. I think it’s time we did a better job helping people see that.