Can AI ever replace the human touch?

On numerous occasions I’ve tried to report an issue or phone my bank to request information, only to find myself dealing with a “bot,” an artificial intelligence — in other words, a computer program powered by an algorithm to “understand” what I’m asking and provide an appropriate response.

And this technology is improving all the time: microphones detect our voices, software understands what we say regardless of regional accent and then selects an appropriate response, delivered in a recorded voice with realistic inflections.

Nevertheless, to be honest, my relief is palpable when I call a large company and a human being answers — a real, live human being who can (hopefully) understand my dialect and accent, filter out the excess wordage and story-telling, and address the precise matter at hand.

I am not demeaning the technology. It is a wonderful advance, it speeds up queues, it cuts phone waiting times, and there are countless other advantages. But what are the limits of the technology? More importantly, where should we — indeed where can we — draw the line?

AI, if programmed with perfect algorithms, can provide a more efficient response than a human being if it’s asked the correct questions. It can be free of human bias, error and emotion, making decisions based on logic and fact, not how a person feels that day. In theory it should be fairer and more accurate. However, an algorithm is only as good as its programmer and the information provided.

AI could in theory be used for triage in hospitals, helping to make swift, accurate decisions on patients’ symptoms and risks. Perhaps it could be used to give an initial, basic analysis and diagnosis of medical ailments to reduce the strain on doctors’ surgeries, referring the patient to the correct therapy or hospital department, or to a doctor if a second opinion were required for something potentially hazardous. The AI could even make appointments.

There are many reasons why someone might need a dispensation from dealing with AI. Some people need more time to process questions, or to shape their answers into a coherent response. A person could lack concise descriptive vocabulary; would an AI know how to extract the relevant information? Others become confused by language and information, and need someone to ask the right questions to draw out the correct answers. Neurodivergent people can struggle to process their thoughts in order to convey information, or may simply take a question too literally in a way that an AI might not pick up on but a human would. A person with dyslexia may struggle with a computer interface, as may many others, depending on their physical, mental and learning abilities, and their experience.

These issues are not limited to medical situations. They apply equally to an application for social housing, a patent application for intellectual property, or the marking of a university examination.

The important questions are: Is AI accurate enough, particularly for matters of life and death?

And who is liable if something goes wrong?

For example, in matters of scientific discoveries and intellectual property, with such huge online access to libraries of global patents, it would be fast and efficient for an AI to search for something already in the market. Indeed, with advanced search engines and alerts, this already happens. But the issue arises of who is liable if a patent is granted and it later turns out that a similar item already exists. Perhaps the software lacked the language to search effectively, but is that the programmer’s fault? Is it a matter of how the search is carried out? Presumably it would require someone familiar with the specific topic to work with the programmers to ensure everything was covered. Essentially, it seems that the AI itself would be best placed to search for potential biases to eliminate discrimination.

This feels like an inevitable move forward; it’s just a matter of time, technology, interface design and having the appropriate conversations to put in place the right kind of policies to protect the public. In the long run, it would greatly improve access to services if AI could at the very least be used for initial screening, providing simple answers and assistance where appropriate and freeing up experts to deal with more complicated situations. In housing, medicine and the law, eliminating long backlogs of less complicated cases would be excellent, but the legal protections are not currently in place, so we would have to ask: at what risk?

Bureaucracy can be a nightmare; we have all endured being sent round in endless circles. Sometimes you just need to speak with a human being, and we need to greatly improve the ease with which we can do so when required.

Dr. Bashayer Al-Majed is a professor of law at Kuwait University and visiting fellow at Oxford. Twitter: @BashayerAlMajed

Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point of view.