Poking Fun at AI

Janelle Shane’s exercises in algorithm-training yield hilarious results.

Much of the chatter about artificial intelligence focuses on the feats it can accomplish and the dangers it could pose. Janelle Shane’s experiments with widely available AI tools reveal a broader picture—including the wrong, the wild, and the absurd solutions AI often generates. As chronicled on her blog, AI Weirdness, and in her new book, You Look Like a Thing and I Love You, Shane commands algorithms to absorb sets of human-generated information—like band names, recipes, or BuzzFeed headlines—and invent new examples. Hence boat names that range from spot on (Sun Princess) to way off (Snot Runner), and the title of her book, an AI’s idea of a pick-up line.

In your writing, AIs sometimes seem like clumsy children who really want to help.

It’s super fun to anthropomorphize them in ways that we definitely shouldn’t. I mean, I’m the one you’ll see cheering on a Roomba as it’s skating around the room: “Oh no, don’t go under there! Not the stairs!” There’s fun in consciously doing that, even though there’s nobody home. With artificial intelligence, though, we have to be more careful. People don’t always understand that these algorithms are much more like Roombas than like the humanlike AI in movies and may assume they’re capable of common sense and morality.

Do we enjoy watching artificial intelligence fail as much as we do watching it succeed?

I think so. We have so many science fiction stories about the super-competent robot, and there’s this worry that artificial intelligence is mysteriously controlling a lot of what we’re doing. I think funny examples—like the AI that came up with a paint color called Stanky Bean because it didn’t understand why it shouldn’t—can illustrate how little today’s AI really comprehends.

Why is it useful to draw attention to these limitations?

I think it’s important for helping people make decisions about what AI should be allowed to do, what we have to watch and regulate, which claims we should believe or be skeptical of. Recently, a politician said that algorithms can’t be biased because they’re based on math. People need a clearer picture of AI in their minds.

What don’t people fully understand about algorithmic bias?

An AI that works by trying to copy or predict human behavior doesn’t know that we want it to avoid the biased part. Any time there is racial bias or gender bias in the behavior an AI is copying, it’s going to copy that as closely as it can. Amazon decided not to use a résumé-sorting tool that they had been working on because they discovered that it had learned how to select against résumés from female applicants. Any time you release an algorithm that’s going to make decisions, you have to systematically check it for bias.
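One simple form of that systematic check is to compare how often the model selects candidates from different groups, sometimes called a demographic-parity audit. The sketch below is illustrative only: the function name and the toy data are invented for this example and are not from Shane's book or Amazon's tool.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compare how often a model selects candidates from each group.

    `decisions` is a list of (group, selected) pairs, where `selected`
    is True if the model advanced that candidate. Large gaps between
    groups are a red flag worth investigating.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

# Hypothetical audit of a resume-screening model's outputs.
audit = [("women", True), ("women", False), ("women", False),
         ("men", True), ("men", True), ("men", False)]
print(selection_rates(audit))  # {'women': 0.33..., 'men': 0.66...}
```

Equal rates don't prove a model is fair, but unequal ones are exactly the kind of signal that prompted Amazon to shelve its tool.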

Do the shortcuts AIs sometimes take resemble how our minds work?

People do take sneaky shortcuts or arrive at a result in ways other than what was intended. Even dolphins will hack their reward functions. A dolphin trained to pick up trash from its tank can discover pretty quickly that the exchange rate is one fish for one piece of trash. It will figure out that it can hide some trash in a corner of the tank and break off pieces, each one of which will be worth a fish all by itself.

Has working with AI changed your view of human abilities?

It gives me an appreciation for the complicated stuff people do without realizing it. Think of image recognition: You can recognize a cat even when it’s curled up in a ball and you can see only part of one ear. There is all this information that humans use to do the simplest tasks, and we run up against that when we try to train algorithms to do them.

You sometimes play with a dimension of AI response that you call “creativity.” How does that work?

With an artificial neural network, if you turn up the creativity level, then you are telling it: “You’re predicting what letter comes next in this word or sentence, and you could go with your top choice every time—or you could go further down the list of possibilities.” You can get some much weirder output that way. Human creativity is kind of like that. There is a tension between what’s safe and what’s out there. How far out there can you go before people don’t get it anymore?
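For readers who want to see the mechanism, here is a minimal sketch of what that "creativity" dial does, assuming it corresponds to the sampling temperature of a character-level language model. The letter scores and the boat-name vocabulary below are made up for illustration.

```python
import math
import random

def sample_next_char(logits, temperature=1.0):
    """Pick the next character from a model's scores.

    `logits` maps each candidate character to an unnormalized score.
    Low temperature: almost always the top choice. High temperature:
    picks further down the list, giving weirder output.
    """
    chars = list(logits)
    # Scale scores by temperature, then softmax them into probabilities.
    scaled = [logits[c] / temperature for c in chars]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(chars, weights=probs, k=1)[0]

# Hypothetical scores for the letter after "Sno" in a boat-name model.
logits = {"w": 2.0, "t": 1.0, "b": 0.2, "r": 0.1}

print(sample_next_char(logits, temperature=0.2))  # nearly always "w" ("Snow...")
print(sample_next_char(logits, temperature=2.0))  # sometimes "t" ("Snot...")
```

At low temperature the model plays it safe; turn the dial up and Snow Runner can become Snot Runner.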

You play the Irish flute. Can you imagine playing AI-composed music?

I have done so, actually. Multiple people have used neural networks to generate Irish tunes. A human has to add in the ornamentation and the pulse and all these things that make it sound like Irish music, but once you do that, it can pass for human-written.

What do you do in your day job as a scientist in the field of optics?

I work at a company that develops light-shaping technology. Programmable holograms are part of what we do: That’s basically shaping one beam of light into hundreds of independently steerable beams.

And you have used it to help study the brain.

I’m helping researchers zap neurons that are genetically engineered to fire when they’ve been exposed to light. With steerable light beams, you can test theories about how different groups of neurons are connected. Our collaborators observed which neurons were active when a mouse viewed vertical stripes on a screen. Then they were able to reactivate some of those neurons—and the mouse behaved as if it were seeing vertical stripes.

Your blog originally featured microscope samples that looked like vivid landscapes. Why share them?

In my research group at that time, you paid for a slot to use the electron microscope, and it was the same cost whether you spent the whole time taking data or discovered that your sample was no good. The blog gave me something to look forward to: I could say, well, if my sample is destroyed, maybe it’ll at least look cool.