Algorithms vs the “human touch”

An algorithm is an unambiguous specification of how to solve a class of problems, and algorithms are used all the time in our day-to-day lives. An easy example to understand is how algorithms generate advertising content based on our internet browsing habits. Someone who browses particular websites will be categorised, for example, as someone who is interested in fashion, and advertising content for that person can then be targeted accordingly. In such circumstances, it doesn’t really matter if the algorithm gets it wrong: the user in question will just get some unwanted advertisements for a while until the algorithm resets their category based on more up-to-date data. However, it has recently emerged in the media that algorithms are increasingly being used to make decisions about individuals which can be genuinely life-changing.
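To make the idea concrete, here is a minimal sketch of how such a rule-based categoriser might work. The site lists, categories and thresholds are all invented for illustration; real advertising systems are far more sophisticated:

```python
# A minimal, invented sketch of rule-based ad categorisation.
# The site lists and category names are hypothetical.

FASHION_SITES = {"vogue.com", "asos.com", "zara.com"}
SPORT_SITES = {"skysports.com", "espn.com"}

def categorise(browsing_history: list[str]) -> str:
    """Assign an advertising category from a user's visited domains."""
    fashion_hits = sum(site in FASHION_SITES for site in browsing_history)
    sport_hits = sum(site in SPORT_SITES for site in browsing_history)
    if fashion_hits > sport_hits:
        return "fashion"
    if sport_hits > fashion_hits:
        return "sport"
    return "general"  # no strong signal: fall back to untargeted adverts

print(categorise(["vogue.com", "bbc.co.uk", "asos.com"]))  # -> "fashion"
```

If the rule misfires here, the cost is only a few irrelevant adverts. The rest of this post is about what happens when the same kind of mechanical rule decides something that matters.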

Algorithms – can they be trusted?

Picture the scene. You’re on a recruitment panel. Your colleague picks up an application and immediately throws it in the bin. You ask why, and the colleague says it’s because the applicant is a woman. We would all (I hope) call out such behaviour.

However, imagine a situation where, rather than a sexist colleague throwing the application in the bin, a computer algorithm determines that the same application should be unsuccessful. You look at the application and think it looks pretty good. Would you just accept the computer’s determination, or would you question it?

Legal challenge and confirmation bias

The UK Home Office is currently facing a legal challenge over its decision to use an algorithm to filter visa applications, on the basis that the algorithm is inherently biased in favour of “rich white people”. The algorithm marks visa applications as green, amber or red according to their level of risk. The Home Office has refused to disclose the factors that the algorithm takes into account when making these categorisations, but many worry that factors such as nationality, age or wealth may be among them. After visiting a visa processing centre that uses an algorithm to categorise applications, Christina Blacklaws, president of the Law Society, said that the technology “may well disadvantage certain groups of people based on generic markers such as age, country of origin or whether they have travelled before.”
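To illustrate the concern, consider a hypothetical sketch of such a traffic-light rule. Every factor, weight and threshold below is invented (the real criteria have not been disclosed), but it shows how a single generic marker can determine the outcome on its own:

```python
# A hypothetical traffic-light "streaming" rule. All factors, weights and
# thresholds are invented for illustration; they do not reflect the real,
# undisclosed Home Office criteria.

HIGH_RISK_NATIONALITIES = {"CountryA", "CountryB"}  # placeholder values

def stream_application(nationality: str, age: int, travelled_before: bool) -> str:
    risk = 0
    if nationality in HIGH_RISK_NATIONALITIES:
        risk += 50  # one generic marker can dominate the whole score
    if age < 25:
        risk += 20
    if not travelled_before:
        risk += 20
    if risk >= 50:
        return "red"
    if risk >= 20:
        return "amber"
    return "green"

# Two otherwise identical applicants are streamed differently on
# nationality alone:
print(stream_application("CountryA", 40, True))  # -> "red"
print(stream_application("CountryC", 40, True))  # -> "green"
```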

The Home Office maintains that the algorithm is used only to “allocate” applications and that visa decisions are ultimately taken by a human caseworker. However, entry clearance officers are reportedly set targets of up to 60 case decisions each day. It is no wonder, therefore, that MP Chi Onwurah has raised the issue of confirmation bias – the tendency of human decision-makers to approve computer-made decisions.

Confirmation bias is a terrifying concept. It means that some people are happy to agree with a computer-generated decision irrespective of their gut feeling on the matter. Sometimes it may be a case of laziness, but far more frightening is the idea that people who may have years and years of experience in their field of expertise are second-guessing their own conclusions because of something said by a computer that makes decisions according to inflexible rules programmed into a system. We are inclined to believe the computer because we think it is clever – but it is only ever as clever (and as unbiased) as the person who programmed it.

The “human touch” and gut feelings

A computer can only make decisions based on the rules it has been programmed to obey. Humans don’t have to follow specific rules when making decisions, and they do not have to explain mathematically how much weight they have put on different considerations before coming to a conclusion. Sometimes, as we often say, you just get a gut feeling – and this is something inherently human which machines have yet to replicate (as far as we are aware).

Of course, the inability to let a gut feeling override logical reasoning can make technology supportive of good decision-making, reduce human error and combat existing systemic biases. But issues can arise if, instead, technology begins to reinforce problematic biases, for example because of errors in algorithm design or because of biases in the underlying datasets. What if the algorithm in the recruitment example above had been programmed by the sexist colleague? Would confirmation bias lead you to agree with the output of the algorithm, notwithstanding your gut feeling regarding the quality of the application, or would you exercise the “human touch”?
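To see how easily that could happen, here is a deliberately crude, entirely hypothetical screening rule. The scoring and the penalty are invented, but they show how a programmer’s prejudice becomes an apparently objective output:

```python
# A deliberately biased screening rule, invented to illustrate the point
# above. Nothing here reflects any real recruitment system.

def screen_application(years_experience: int, gender: str) -> str:
    score = years_experience * 10
    if gender == "female":
        score -= 100  # the programmer's prejudice, encoded as a "rule"
    return "interview" if score >= 50 else "reject"

# A strong application is rejected purely because of the hidden rule:
print(screen_application(8, "female"))  # -> "reject"
print(screen_application(8, "male"))    # -> "interview"
```

The output looks neutral – a single word, “reject” – and that apparent neutrality is exactly what makes confirmation bias so dangerous.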

Solutions?

The Centre for Data Ethics and Innovation (CDEI) has recognised the issues surrounding over-reliance on algorithms, acknowledging that there is a risk that any human bias in data or design will be reflected in the recommendations that algorithms make. The CDEI therefore wants to ensure that those using such technology understand the potential for bias and have measures in place to address it.

The question, though, is what will really win out in the end. Will an under-pressure caseworker, lawyer or doctor be more likely to take the time to analyse the recommendation of the technology that has been brought in to increase efficiency, or to rely on that recommendation at the expense of their gut feeling on the matter? Only time will tell. In the meantime, we can only hope that the designers of these algorithms appreciate the impact their algorithms can have on the lives of individuals, and use their “human touch” accordingly.

For more information, please contact Claire or Rebecca in our employment team. 
