What Are Faultlines and Why Are They a Big Deal for Teams?

And what does this have to do with AI?

Posted Mar 20, 2019

What does research tell us about what AI means for different types of teams? Since ‘different types’ can mean lots of things, we focus on how homogeneous teams are: whether a team is made up of distinct subgroups or whether everyone on it shares the same background and demographics. If a team has subgroups, we say those subgroups are separated by a faultline (a term borrowed from geology, which sometimes gets team researchers confused with geologists, but what can you do, it’s a nice term). For example, a project team would have a faultline if all the white team members are men under 25 years old and all the black members are women over 40 (the correlated attributes here are race, gender, and age). Another example, common in so many workplaces today, is when all the younger employees are self-employed as ‘independent contractors’ or consultants while the older employees are full-time.
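To make the idea concrete, here is a minimal sketch in Python, using a hypothetical team modeled on the example above. When demographic attributes are perfectly correlated, grouping members by their full attribute profile splits the team into internally identical subgroups, which is a faultline in its strongest form. The names, attributes, and the `subgroups_by_profile` helper are our own illustration, not a measure from the research literature.

```python
from collections import defaultdict

def subgroups_by_profile(members):
    """Group team members whose demographic attributes all match exactly."""
    groups = defaultdict(list)
    for name, attrs in members.items():
        # The key is the member's full attribute profile (race, gender, age band).
        groups[tuple(sorted(attrs.items()))].append(name)
    return list(groups.values())

# Hypothetical project team echoing the example in the text
team = {
    "Alex": {"race": "white", "gender": "man",   "age": "under 25"},
    "Ben":  {"race": "white", "gender": "man",   "age": "under 25"},
    "Cara": {"race": "black", "gender": "woman", "age": "over 40"},
    "Dana": {"race": "black", "gender": "woman", "age": "over 40"},
}

print(subgroups_by_profile(team))
# Two internally homogeneous subgroups: a strong faultline
```

Published faultline measures (such as Lau and Murnighan's, or later strength indices) are more nuanced than this all-or-nothing split, but the intuition is the same: the more the attributes line up, the cleaner the divide.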

Here is where AI comes in. By the way, it’s harder to find an agreed-upon definition of AI than you might think; we think of the "tools" of AI as the hardware and software, where the software is “trained” on vast sets of data rather than programmed with specific rules. AI is already changing how a lot of people work: it has been applied to colorizing movies, fraud detection, generating marketing leads, robotic surgery, and language translation services. Yet very little has been written or researched about how AI and automated technologies might affect teamwork, especially for people who do not have full-time jobs (we are thinking of you, Uber and Lyft drivers and others with part-time gigs).

The role of faultlines could be important when there is a difference in access to technology, as when the organization owns the AI (say, as proprietary software), so that full-time employees have access but other members of the team do not. Such imbalances can put subteams into conflict, dividing a group along faultlines. That could be a big deal for teams: Dora Lau and Keith Murnighan’s original faultline concept has been shown, over the past two decades, to predict all sorts of performance, health, and other team outcomes. The danger is that a divide between those who have resources (e.g., AI) and those who do not increases perceptions of inequality, leading to competition between subgroups. The effects on individual team members, then, could stem partly from their positions in teams and from who owns the tools. When the AI is owned or controlled by the ‘outsiders’ (say, a consulting team brings in its own software or technology), this may create a more level playing field, allowing the self-employed to counteract the power traditionally held by the organization. In this case, contingent workers have more leverage in the team. Beyond who has the expertise and who owns the AI, another factor is the work itself: the more the work is interdependent and requires a team, the less likely technology will displace self-employed workers.

Based on the “who has the knowledge, who owns the tools” idea, AI is more likely to put contingent workers at high risk when an organization controls the AI (e.g., a shift to driverless-car technology where the self-employed are currently the vehicle operators). But if the contingent workers are the experts, working alongside an organization’s full-time employees, there is less risk to them (“We brought you guys in here to show us how to run this thing!”). There is still some risk to contingent workers when the organization owns the AI, since it can train its full-time employees to use it and thereby replace the contingent workers. All of these things have happened in one workplace or another.

Let’s face it: predicting any pattern behind these scenarios and relationships is speculation, because little systematic research has been done. But it is no speculation that AI is expanding, and many people fear that jobs may be at risk from AI or some variant of it, if not now, then sometime in the future. Consider the Luddites: historians tell us they were not protesting the machines themselves; they were in fact skilled at operating them. Their beef was more about changes to their work hours and conditions. And this might give us a clue about how to understand AI and its implications. We know that part-time and contingent employment is found in every major industry and at every level of education. If you haven’t yet felt directly affected by AI advances, or don’t know someone whose job has been affected, you likely will soon enough. So how should people prepare? That’s a topic we’ll cover in a future post.

Written by Chester Spell and Katerina Bezrukova