
Biased Algorithms?

Find out how they work and how to beat the system.


HAVE you ever been rejected from a university? Ever had a job, credit card, mortgage, rental, travel visa, scholarship, or study permit application turned down without explanation? Have you ever been detained at airport security or immigration, or been denied boarding? Have you ever spent hours on the phone wading through automated menus, only to speak to underpaid, overworked, flustered attendants who do not understand your language, cannot spell your name right, and tell you there is nothing they can do?

A Case in Point.

I was recently reminded of how alienating this process can be for all the people who are neither rich nor citizens of privileged countries – that is to say, for the vast majority of people alive on our planet.

Bear with me through the mind-numbingly uninteresting, rage-inducing case of a relative of mine: a citizen of Brazil who was denied boarding by the low-cost company Air Europa on a flight from Italy back to her own country. This sad story, boring as it might be, will resonate with all of us, and provides a case in point for examining the problem and a possible solution.

Before being made aware of the issue, I had booked the Air Europa flight online for my relative using my Canadian credit card. Having learned to be cautious about inflexible deals through predatory online travel agents like Orbitz or Expedia, I had made the booking directly on the airline’s website. Days after the itinerary had been issued, I was informed via an automated email that I would be required to email copies of my card, ID, and a signed authorization to confirm the booking. So far so good.

As paying customers in an age of virtual transactions, we should welcome checks and balances put in place to prevent fraudulent activities on our accounts. After duly sending in the forms and spending a very long time on the phone trying to obtain a confirmation, I was asked to resubmit the forms, not once but twice. The first time, the card signature was deemed illegible. The second time around, I was informed that the authorization (accepted in the previous email) had to be filled out by hand, then scanned, then emailed. I dutifully complied. By then, I had squandered over half a business day on this opaque procedure. I wondered how someone with limited or no English and no access to email, a printer, and a scanner would fare in this situation.

When I finally succeeded in speaking to a human from Air Europa, I was told that all documents had been received, that the flight had been purchased and booked, but that the passenger would not be allowed to travel unless I (as the credit card holder) showed up at Milan Airport (recall that I was phoning from Canada) to present my card in person. This was maddeningly stupid. I offered to email copies of my driver’s license, medical and staff ID cards, proofs of address: whatever the company needed to confirm my identity. I was corresponding with them from my university email account. Another hour on the phone (mostly on hold) ensued, as I unsuccessfully asked to be transferred to a supervisor. The attendant confirmed in the end that nothing could be done.

“Surely”, I inquired, trying my best to remain polite with the flustered attendant who was only relaying opaque orders, “as an airline company, you must deal with hundreds of such cases every day: surely (I continued), in 2017, many tickets are purchased online by a third party who doesn’t reside in the country of travel. I have filled out all your forms and sent you all the required documents. I have paid money for a service. There is no valid reason for this vetting process to continue at this stage!”

“I understand and I do apologize, sir”, the flustered attendant replied, “but this block is coming from our fraud department, and we cannot override it. When passengers from…er…certain countries…have their tickets purchased from North America, this comes up as a high risk for fraud”.

I had no choice at that stage but to accept a refund. In the meantime, my Brazilian relative waited in Italy, a mere day before the expiration of her tourist visa, facing deportation and a tarnished immigration record.

Understanding the Problem.

A dystopian future. Source: common sense media

WHEN previous generations imagined a dystopian future controlled by robots, their vision was probably one of a post-nuclear apocalypse ruled by humanoid-looking mechanical soldiers. Little did they know that ours would be an infinitely bleaker, more opaque – but equally divisive – future than the one imagined by the writers of Terminator 2.

In 2017, police forces do look increasingly like RoboCop, and lethal drone strikes in Afghan mountain caves can be unleashed at the click of a mouse from a strip-mall office in Arizona. But the robots that control most of what we can do and where we can go are silent computer programs that most of us will never see, understand, or influence in any way.

These algorithms are responsible for most of the decisions made on our behalf in modern life, from travel, credit card, mortgage, school, and job applications. They also control what we see and whom we associate with in our social media newsfeeds, and are now widely acknowledged to influence us well beyond our consumption habits, reaching into the deeply personal realm of our political decisions, and even our dating and mate choices.

Cathy O’Neil, a former math professor and data scientist turned activist, has denounced these algorithms as Weapons of Math Destruction (WMDs). After working as a hedge fund statistician in the financial industry and witnessing the disastrous economic effects of bad algorithmic predictions, O’Neil has spearheaded a campaign that seeks to educate the broader public on the flaws of WMDs while encouraging us to demand transparency and accountability from the governments and corporations that rely on flawed algorithms to manage our lives.

O’Neil points out that efficiency-improving algorithms often turn into Weapons of Math Destruction through two fundamental design flaws:

1) The predictions they generate based on prior data (e.g., the statistical likelihood of someone repaying a loan) are not impartial. Risks in these models are defined according to preset criteria (e.g., living in a certain neighborhood) that are not free of moral and cultural assumptions. These assumptions often tend to reflect, and thereby reinforce, the existing status quo (e.g., poverty and systemic racism); a toy sketch of how this happens follows this list.

2) Governments and companies are almost never transparent about their algorithms. They do not divulge the criteria and method used to generate predictions, and do not provide detailed reports to citizens and customers. They are in no way accountable for the consequences of how opaque datasets generated about our every trait and move are used to control what we can see and do.
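
To make the first flaw concrete, here is a minimal, purely hypothetical sketch of a loan-risk scorer. Every feature, weight, threshold, and zip code below is invented for illustration; it is not drawn from any real lender’s model or from O’Neil’s book.

```python
# A purely illustrative sketch of flaw #1: a "loan risk" score whose preset
# criteria encode assumptions about neighborhoods. All numbers are invented.

# Hypothetical preset criterion: zip codes flagged as "high risk" in past data.
# Past defaults concentrated there largely because of poverty, not individual behavior.
HIGH_RISK_ZIP_CODES = {"20019", "60621", "90003"}

def loan_risk_score(applicant: dict) -> float:
    """Return a risk score between 0 (low) and 1 (high) for a loan applicant."""
    score = 0.0
    # Individual repayment history: the only factor actually about this person.
    score += 0.4 * (1 - applicant["on_time_payment_rate"])
    # Preset, proxy criteria baked in by the model's designers:
    if applicant["zip_code"] in HIGH_RISK_ZIP_CODES:
        score += 0.4          # living in a poor neighborhood raises the score
    if applicant["income"] < 30_000:
        score += 0.2          # low income raises the score further
    return min(score, 1.0)

# Two applicants with identical repayment behavior:
a = {"on_time_payment_rate": 0.95, "zip_code": "20019", "income": 28_000}
b = {"on_time_payment_rate": 0.95, "zip_code": "98040", "income": 95_000}
print(loan_risk_score(a))   # 0.62 -> likely denied
print(loan_risk_score(b))   # 0.02 -> approved
```

The two applicants repay their bills at exactly the same rate, yet they land on opposite sides of the approval threshold; denying credit to the first keeps the neighborhood poor, which keeps the zip code on the list.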

In her recent book on WMDs, O’Neil recounts one teacher’s harrowing battle to demand an explanation for her dismissal from a DC public school, a decision based on a computer-generated evaluation score. “It’s an algorithm, and it’s complicated”, the teacher kept being told.

But the problem, rather, is that these algorithms are most often too simple.

When algorithms do not take into account (let alone seek to address!) the many socio-economic, poverty-related issues that contribute to consistently low test scores in poor school districts, for example, and when these test scores are used as the basis for dismissing teachers, WMDs are reinforcing, not fixing, the problem. The same systemic feedback loop underpins the infamous use of predictive-policing software to allocate police resources in poor neighborhoods in certain cities, as the small simulation below illustrates.
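
A toy simulation, with numbers invented solely for illustration (no real police department’s data or software is being described), shows why such a loop never corrects itself: patrols are sent where incidents were recorded, but incidents are only recorded where patrols are sent.

```python
# Toy feedback-loop simulation with invented numbers: two neighborhoods with
# IDENTICAL underlying crime rates, but a small historical imbalance in the
# recorded data. Patrols follow the records; the records follow the patrols.

TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}   # the same underlying rate in both places
recorded = {"A": 12.0, "B": 8.0}           # a small initial imbalance in past records
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(recorded.values())
    # Allocate patrols in proportion to last year's recorded incidents.
    patrols = {n: TOTAL_PATROLS * recorded[n] / total for n in recorded}
    # Incidents are only recorded where officers are present to record them.
    recorded = {n: patrols[n] * TRUE_CRIME_RATE[n] for n in recorded}
    print(year, {n: round(p) for n, p in patrols.items()})

# Output: neighborhood A receives 60 patrols to B's 40, year after year.
# The initial imbalance is locked in, even though the true rates are identical,
# because the data describe where police looked, not where crime happened.
```

The numbers are made up; the structure is the point. A model trained on its own outputs has no way to discover that its starting assumptions were wrong.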

The e-scores generated by the invisible third-party companies that sell our browsing data (including personal email and messaging, medical and psychological information, etc.) to predatory schools, insurers, and businesses of all kinds are similarly opaque.

Demanding Transparency.

Some systems, however, have made an effort to provide transparency. FICO credit scores may be problematic when they are used as a moral measure to determine a person's worth, or their eligibility for jobs, housing, or even dating, but the system does provide each of us with a copy of our report and an explanation of the score. Discrepancies can be contested and eventually (if very slowly) corrected. The system may not be fair by some standards, but it is at least transparent and contestable.

It is time for other companies to follow the FICO model. Why don’t Google, Facebook, Netflix, and Amazon provide us with detailed reports on all our e-scores, including a breakdown of how they are measured, what they are used for, and whom they are sold or given to?

Have you ever asked yourself the following questions:

Would you like to know how your employer generates an employee profile based on your browsing history? Do data generated from Gmail texts, for example, churn out a “hypochondriac” or “chronically ill” score about people who complain about their health, which is then sold to pharmaceutical companies? Does Tinder put people in categories like “this one prefers 25-year-old blondes with professional jobs and artsy pictures, but is more likely to get replies from 35-year-old brunettes without an Instagram account and a college degree”?

Would you like to know your hypochondria and dating potential score? How about your fraud-risk and terrorist scores?

Let’s keep asking questions:

Do Expedia and Kayak generate a terrorism-risk score for their customers? What criteria would an algorithm use to make such a claim? Would an American-born professor named Mohammed who travels to Istanbul three times a year be considered a risk? Would he be allowed to see a full e-score report, along with the criteria used to generate the score?

In the case of Air Europa and my stranded Brazilian relative, it isn’t too hard to guess the method.

A Brazilian passport increases the “risk” that a customer may be poor. A final destination airport code in Northern Brazil further increases the “risk” that the customer may be poor and black. A poor and black person from Northern Brazil has a low statistical likelihood of having a relative in Canada (a country in which there aren’t many Brazilian immigrants) pay for their ticket. The transaction must therefore be fraudulent. Signed forms and documents, so the algorithmic reasoning goes, can also be faked. Better, from the company’s point of view, to keep its own risk low and deny the transaction. Data, if they were available to the public, would surely show that [presumably poor and black] passengers who reside in Northern Brazil also have a very low statistical likelihood of writing a customer satisfaction complaint that would result in any cost to the company.
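
If that guess is right, the underlying rule could be as crude as the following sketch. Everything in it is speculative: the criteria, weights, and threshold are invented to make the suspected reasoning concrete, and nothing is drawn from Air Europa's actual fraud system.

```python
# A purely speculative sketch of the kind of rule the author suspects was
# applied. The features, weights, and threshold are invented for illustration.

def guessed_fraud_risk(booking: dict) -> float:
    """Return a hypothetical fraud-risk score for a booking (0 = low, 1 = high)."""
    score = 0.0
    if booking["passenger_nationality"] == "BR":
        score += 0.3            # Brazilian passport: proxy for "may be poor"
    if booking["destination_region"] == "Northern Brazil":
        score += 0.3            # destination region: proxy for "poor and black"
    if booking["card_country"] != booking["passenger_country_of_residence"]:
        score += 0.3            # third-party payer abroad: "statistically unlikely"
    # Note what is absent: the verified documents play no role at all,
    # because "signed forms can be faked".
    return score

booking = {
    "passenger_nationality": "BR",
    "passenger_country_of_residence": "BR",
    "destination_region": "Northern Brazil",
    "card_country": "CA",
    "documents_verified": True,   # ignored by the score above
}
if guessed_fraud_risk(booking) >= 0.8:
    print("Deny boarding: this passenger is also unlikely to file a costly complaint.")
```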

It is at present impossible to determine whether my analysis is correct. Impossible, that is, until companies like Air Europa make their "risk" analytics fully transparent.

The way forward for all of us lies in demanding transparency and accountability. Only then will we have a say in the criteria used to measure and rank us as high or low risks of costing companies money, and as high or low potential for making them money. Once that too becomes apparent, another way forward will be possible.

When we are done uncovering the racist and classist criteria used by WMDs to perpetuate stereotypes, limit our mobility, and reinforce systemic inequalities on a global scale, it will be up to us to demand a better, fairer system – one, we can only hope, that is not driven by an inhumane logic of profit-making for the very few.

Do we have a long way to go? Maybe not.

Please share this post, and reclaim the use of internet algorithms to generate change from the people, for the people.
