Runaway growth of AI chatbots portends a future poised between utopia and dystopia

Updated 18 April 2023

  • Engineers who had been slogging away for years in academia and industry are finally having their day in the sun
  • Job displacements and social upheavals are nothing compared to the extreme risks posed by advancing AI tech

DUBAI: It was way back in the late 1980s that I first encountered the expressions “artificial intelligence,” “pattern recognition” and “image processing.” I was completing the final semester of my undergraduate studies, while also writing up my last story for the campus magazine of the Indian Institute of Technology at Kharagpur.

Never having come across these technical terms during the four years I majored in instrumentation engineering, I was surprised to discover that the smartest professors and the brightest postgrad students of the electronics and computer science and engineering departments of my own college were neck-deep in research and development work involving AI technologies. All while I was blissfully preoccupied with the latest Madonna and Billy Joel music videos and Time magazine stories about glasnost and perestroika.

Now that the genie is out, the question is whether Big Tech is willing, or even able, to address the issues raised by the runaway growth of AI. (Supplied)

More than three decades on, William Faulkner’s oft-quoted saying, “the past is never dead. It is not even past,” rings resoundingly true to me, albeit for reasons more mundane than sublime. Terms I had seldom bumped into as a newspaperman and editor since leaving campus — “artificial intelligence,” “machine learning” and “robotics” — have sneaked back into my life, this time not as semantic curiosities but as man-made creations for good or ill, with the power to make me redundant.

Indeed, an entire cottage industry that did not exist just six months ago has sprung up to both feed and whet a ravenous global public appetite for information on, and insights into, ChatGPT and other AI-powered web tools.

Teachers are seen behind a laptop during a workshop on the ChatGPT bot organized by the School Media Service (SEM) of the public education department of the Swiss canton of Geneva on February 1, 2023. (AFP)

The initial questions about what kinds of jobs would be created and how many professions would be affected have given way to far more profound discussions. Can conventional religions survive the challenges that will spring from artificial intelligence in due course? Will humans ever need to rack their brains to write fiction, compose music or paint masterpieces? How long will it take before a definitive cure for cancer is found? Can public services and government functions be performed by vastly more efficient and cheaper chatbots in the future?

As recently as October last year, few of us employed outside the arcane world of AI could have anticipated an explosion of existential questions of this magnitude in our lifetime. The speed with which they have moved from the fringes of public discourse to center stage reflects both the severely disruptive nature of the developments and their potentially unsettling impact on the future of civilization. Like it or not, we are all engineers and philosophers now.

Attendees watch a demonstration on artificial intelligence during the LEAP Conference in Riyadh last February. (Supplied)

By most accounts, as yet no jobs have been eliminated and no collapse of the post-Impressionist art market has occurred as a result of the adoption of AI-powered web tools, but if the past (as well as Ernest Hemingway’s famous phrase) is any guide, change will happen at first “gradually, then suddenly.”

In any event, the world of work has been evolving almost imperceptibly but steadily since automation disrupted the settled rhythms of manufacturing and service industries that were essentially byproducts of the First Industrial Revolution.

For people of my age group, a visit to a bank today bears little resemblance to one undertaken in the 1980s and 1990s, when withdrawing cash meant standing in an orderly line first for a metal token, then waiting patiently in a different queue to receive a wad of hand-counted currency notes, each process involving the signing of multiple counterfoils and the spending of precious hours.

Although the level of efficiency likely varied from country to country, the workflow required to dispense cash to bank customers before the advent of automated teller machines was more or less the same.

Similarly, a visit to a supermarket in any modern city these days feels rather different from the experience of the late 1990s. The row upon row of checkout staff have all but disappeared, leaving behind a lean-and-mean mix with the balance tilted decidedly in favor of self-service lanes equipped with bar-code scanners, contactless credit-card readers and thermal receipt printers.

Whatever one may call these endangered jobs in retrospect, whether minimum-wage drudgery or a decent livelihood, society seems to have accepted that there is no turning the clock back on technological advances whose benefits outweigh the costs, at least from the point of view of business owners and shareholders of banks and supermarket chains.

Likewise, with the rise of generative AI (GenAI), a new world order (or disorder) is bound to emerge, perhaps sooner rather than later, but of what kind, only time will tell.

Just four months after ChatGPT was launched, OpenAI’s conversational chatbot is facing at least two complaints before a regulatory body in France over the use of personal data. (AFP)

In theory, ChatGPT could tell too. To this end, many a publication, including Arab News, has carried interviews with the chatbot, hoping to get the truth from the machine’s mouth, so to speak, instead of relying on the thoughts and prescience of mere humans.

But the trouble with ChatGPT is that the answers it punches out depend on the “prompts” or questions it is asked. The answers will also vary with every update of its training data and the lessons it draws from these data sets’ internal patterns and relationships. Put simply, what ChatGPT or GPT-4 says about its destructive powers today is unlikely to remain unchanged a few months from now.
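To make the point concrete, here is a minimal sketch of how the same question can be put to two model versions with two phrasings and yield different answers each time. It assumes the legacy OpenAI Python library (pre-1.0) that was current when this column was written, an API key supplied via the environment, and illustrative choices of model names, temperature and question; none of these details are drawn from the interviews mentioned above.

```python
# Sketch: the same chatbot gives different answers depending on the prompt,
# the model version queried and the sampling randomness.
# Assumes the legacy OpenAI Python library (openai<1.0) and OPENAI_API_KEY
# set in the environment; model names and the question are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

QUESTION = "Could AI chatbots ever pose a threat to human civilization?"

for model in ("gpt-3.5-turbo", "gpt-4"):                      # two snapshots of the "same" chatbot
    for prompt in (QUESTION, "Answer in one sentence: " + QUESTION):  # two phrasings of one question
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,  # sampling randomness adds yet another source of variation
        )
        print(f"--- {model} | {prompt}")
        print(response["choices"][0]["message"]["content"].strip(), "\n")
```

Run twice, or after a model update, the script will generally not print identical answers, which is precisely why a chatbot’s pronouncements about its own destructive powers are a moving target.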

Meanwhile, tantalizing though the tidbits have been, the occasional interview with the CEO of OpenAI, Sam Altman, or the CEO of Google, Sundar Pichai, has shed little light on the ramifications of rapid GenAI advances for humanity.

OpenAI CEO Sam Altman, left, and Microsoft CEO Satya Nadella. (AFP)

With multibillion-dollar investments at stake and competition for market share intensifying between Silicon Valley companies, these chief executives, as well as Microsoft CEO Satya Nadella, can hardly be expected to objectively answer the many burning questions, starting with whether Big Tech ought to declare “a complete global moratorium on the development of AI.”

Unfortunately for a large swathe of humanity, the great debates of the day, featuring polymaths who can talk without fear or favor about a huge range of intellectual and political trends, are raging mostly out of reach, behind the strict paywalls of publications such as Bloomberg, the Wall Street Journal, the Financial Times and Time.

An essay by Niall Ferguson, the pre-eminent historian of the ideas that define our time, published in Bloomberg on April 9, offers a peek into the deepest worries of philosophers and futurists, implying that the fears of large-scale job displacements and social upheavals are nothing compared to the extreme risks posed by galloping AI advancements.

“Most AI does things that offer benefits not threats to humanity … The debate we are having today is about a particular branch of AI: the large language models (LLMs) produced by organizations such as OpenAI, notably ChatGPT and its more powerful successor GPT-4,” Ferguson wrote before going on to unpack the downsides.

In sum, he said: “The more I read about GPT-4, the more I think we are talking here not about artificial intelligence … but inhuman intelligence, which we have designed and trained to sound convincingly like us. … How might AI off us? Not by producing (Arnold) Schwarzenegger-like killer androids (of the 1984 film “The Terminator”), but merely by using its power to mimic us in order to drive us insane and collectively into civil war.”

Intellectually ready or not, behemoths such as Microsoft, Google and Meta, together with not-so-well-known startups like Adept AI Labs, Anthropic, Cohere and Stable Diffusion API, have had greatness thrust upon them by virtue of having developed their own LLMs with the aid of advances in computational power and mathematical techniques that have made it possible to train AI on ever larger data sets than before.

Just like in Hindu mythology, where Shiva, as the Lord of Dance Nataraja, takes on the persona of a creator, protector and destroyer, in the real world tech giants and startups (answerable primarily to profit-seeking shareholders and venture capitalists) find themselves playing what many regard as the combined role of creator, protector and potential destroyer of human civilization.

Microsoft is the “exclusive” provider of cloud computing services to OpenAI, the developer of ChatGPT. (AFP file)

While it does seem that a science-fiction future is closer than ever before, no technology exists as of now to turn back time to 1992 and enable me to switch from instrumentation engineering to computer science instead of a vulnerable occupation like journalism. Jokes aside, it would be disingenuous of me to claim that I have not been pondering the “what-if” scenarios of late.

Not because I am terrified of being replaced by an AI-powered chatbot in the near future and compelled to sign up for retraining as a food-delivery driver. Journalists are certainly better psychologically prepared for such a drastic reversal of fortune than the bankers and property owners in Thailand who overnight had to learn to sell food on the footpaths of Bangkok to make a living in the aftermath of the 1997 Asian financial crisis.

The regret I have is more philosophical than material: We are living in a time when engineers who had been slogging away for years in the forgotten groves of academe and industry, pushing the boundaries of AI and machine learning one line of autocorrect code at a time, are finally getting their due as the true masters of the universe. It would have felt good to be one of them, no matter how relatively insignificant one’s individual contribution.

There is a vicarious thrill, though, in tracking the achievements of a man by the name of P. Sundararajan, who won admission to my alma mater to study metallurgical engineering one year after I graduated.

Google Inc. CEO Sundar Pichai (C) is applauded as he arrives to address students during a forum at The Indian Institute of Technology in Kharagpur, India, on January 5, 2017. (AFP file)

Now 50 years old, he has a big responsibility in shaping the GenAI landscape, although he probably had no inkling of what fate had in store for him when he was focused on his electronic materials project in the final year of his undergrad studies. That person is none other than Sundar Pichai, whose path to the office of Google CEO went via IIT Kharagpur, Stanford University and Wharton business school.

Now, just as in the final semester of my engineering studies, I have no illusions about the exceptionally high IQ required to be even a writer of code for sophisticated computer programs. In an age of increasing specialization, “horses for courses” is not only a rational approach, it is practically the only game in town.

I am perfectly content with the knowledge that in the pre-digital 1980s, well before the internet as we know it had even been created, I had got a glimpse of the distant exciting future while reporting on “artificial intelligence,” “pattern recognition” and “image processing.” Only now do I fully appreciate how great a privilege it was.

EU bans 4 more Russian media outlets from broadcasting in the bloc, citing disinformation

Updated 18 May 2024

  • The EU has already suspended Russia Today and Sputnik among several other outlets since February 2022

BRUSSELS: The European Union on Friday banned four more Russian media outlets from broadcasting in the 27-nation bloc for what it calls the spread of propaganda about the invasion of Ukraine and disinformation as the EU heads into parliamentary elections in three weeks.
The latest batch of broadcasters consists of Voice of Europe, RIA Novosti, Izvestia and Rossiyskaya Gazeta, all of which the EU says are under the control of the Kremlin. It said in a statement that the four are targeting, in particular, “European political parties, especially during election periods.”
Belgium last month opened an investigation into suspected Russian interference in June’s Europe-wide elections, saying the country’s intelligence service had confirmed the existence of a network trying to undermine support for Ukraine.
The Czech government has imposed sanctions on a number of people after a pro-Russian influence operation was uncovered there. They are alleged to have approached members of the European Parliament and offered them money to promote Russian propaganda.
Since the war started in February 2022, the EU has already suspended Russia Today and Sputnik among several other outlets.


Israeli soldiers post abusive videos despite army’s pledge to act: BBC analysis

Updated 17 May 2024

  • The BBC analyzed 45 photos and videos posted online by Israeli soldiers that showed Palestinian prisoners in the West Bank being abused and humiliated

LONDON: Israeli soldiers continue to post videos of abuse against Palestinian detainees despite a military pledge to take action against the perpetrators, analysis by the BBC has found.

The broadcaster said it had analyzed 45 photos and videos posted online by Israeli soldiers that showed Palestinian prisoners in the West Bank being abused and humiliated. Some were draped in Israeli flags. 

Experts say the footage and images, which showed Palestinians being stripped, beaten and blindfolded, could breach international law and amount to a war crime.

The Israel Defense Forces said some soldiers had been disciplined or suspended for “unacceptable behavior” but did not comment on the individual cases identified by the BBC.

The most recent investigation into social media misconduct by Israeli soldiers follows a previous inquiry in which BBC Verify confirmed Israeli soldiers had filmed Gazan detainees while beating them and then posted the material on social platforms.

The Israeli military has carried out arbitrary arrests across Gaza and the West Bank, including East Jerusalem, since the Hamas attack on Oct. 7. The number of Palestinian prisoners in the West Bank has since risen to more than 7,060, according to the Commission of Detainees’ Affairs and the Palestinian Prisoner Society.

Ori Givati, spokesperson for Breaking the Silence, a non-governmental organization for Israeli veterans working to expose wrongdoing in the IDF, told the BBC he was “far from shocked” to hear the misconduct was ongoing.

Blaming “current far-right political rhetoric in the country” for further encouraging the abuse, he added: “There are no repercussions. They [Israeli soldiers] get encouraged and supported by the highest ministers of the government.”

He said this played into a mindset already subscribed to by the military: “The culture in the military, when it comes to Palestinians, is that they are only targets. They are not human beings. This is how the military teaches you to behave.”

The BBC’s analysis found that the videos and photos it examined were posted by 11 soldiers of the Kfir Brigade, the largest infantry brigade in the IDF. None of them hid their identity.

The IDF did not respond when the BBC asked about the actions of the individual soldiers and whether they had been disciplined.

The BBC also attempted to contact the soldiers on social media. The organization was blocked by one, while none of the others responded.

Mark Ellis, executive director of the International Bar Association, urged an investigation into the incidents shown in the footage and called for the IDF to discipline those involved.

In response to the BBC’s investigation, the IDF said: “The IDF holds its soldiers to a professional standard … and investigates when behavior is not in line with the IDF’s values. In the event of unacceptable behavior, soldiers were disciplined and even suspended from reserve duty.

“Additionally, soldiers are instructed to avoid uploading footage of operational activities to social media networks.”

However, it did not acknowledge its pledge to act on BBC Verify’s earlier findings in Gaza, according to the broadcaster.


4 journalists killed in Gaza as death toll climbs above 100

Updated 17 May 2024

  • 104 Palestinian media workers reported dead, along with 3 Lebanese and 2 Israelis

LONDON: The Gaza Media Authority on Thursday said that four journalists had been killed in an Israeli airstrike, bringing the total number of journalists killed in the conflict to more than 100.

The victims were identified as Hail Al-Najjar, a video editor at the Al-Aqsa Media Network; Mahmoud Jahjouh, a photojournalist at the Palestine Post website; Moath Mustafa Al-Ghefari, a photojournalist at the Kanaan Land website and Palestinian Media Foundation; and Amina Mahmoud Hameed, a program presenter and editor at several media outlets, according to the Anadolu Agency.

The Gaza Media Office said the four were killed in an Israeli airstrike, but did not provide additional details on the circumstances surrounding their deaths.

A total of 104 Palestinian journalists have been killed since the conflict began on Oct. 7. Two Israeli and three Lebanese media workers also have been killed.

The latest loss adds to the already heavy toll on media workers, with the Committee to Protect Journalists saying the Gaza conflict is the deadliest for journalists and media workers since it began keeping records.

Israel is continuing its offensive on Gaza despite a UN Security Council resolution demanding an immediate ceasefire.

On Thursday, South Africa, which has brought a case accusing Israel of genocide to the International Court of Justice, urged the court to order Israel to halt its assault on Rafah.

According to Gaza medical authorities, more than 35,200 Palestinians have been killed, mostly women and children, and over 79,200 have been injured since early October when Israel launched its offensive following an attack by Hamas.


Russia outlaws SOTA opposition news outlet

Updated 17 May 2024

  • Authorities said the outlet tries to destabilize the socio-political situation in Russia
  • Move could criminalize sharing SOTA content and put its reporters at risk of arrest

LONDON: Russia declared opposition media outlet SOTA “undesirable” on Thursday, a move that could criminalize the sharing of its content and put its reporters at risk of arrest.
Authorities in Russia have declared dozens of news outlets, think tanks and non-profit organizations “undesirable” since 2015, a label rights groups say is designed to deter dissent.
In a statement, Russia’s Prosecutor General accused SOTA of “frank attempts to destabilize the socio-political situation in Russia” and “create tension and irritation in society.”
“Such activities, obviously encouraged by so-called Western inspirers, have the goal of undermining the spiritual and moral foundations of Russian society,” it said.
It also accused SOTA of co-operating with TV Rain and The Insider, two other independent Russian-language outlets based outside of the country that are linked to the opposition.
SOTA Project, which covers opposition protests and has been fiercely critical of the Kremlin, denied it had anything to do with TV Rain and The Insider and rejected the claims.
But it advised its followers in Russia to “remove reposts and links” to its materials to avoid the risk of prosecution. SOTA’s Telegram channel has around 137,000 subscribers.
“Law enforcement and courts consider publishing online to be a continuing offense. This means that you can be prosecuted for reposts from 2023, 2022, 2021,” it said.
SOTA Project was born out of a split with a separate news outlet called SOTAvision, which still covers the opposition but distanced itself from the prosecutors’ ruling on Thursday.
Since launching its offensive in Ukraine, Moscow has waged an unprecedented crackdown on dissent that rights groups have likened to Soviet-era mass repression.
Among other organizations labelled as “undesirable” in Russia are the World Wildlife Fund, Greenpeace, Transparency International and Radio Free Europe/Radio Liberty.


OpenAI strikes deal to bring Reddit content to ChatGPT

Updated 17 May 2024

  • Deal underscores Reddit’s attempt to diversify beyond its advertising business
  • Content will be used to train AI models

LONDON: Reddit has partnered with OpenAI to bring its content to popular chatbot ChatGPT, the companies said on Thursday, sending the social media platform’s shares up 12 percent in extended trade.
The deal underscores Reddit’s attempt to diversify beyond its advertising business, and follows its recent partnership with Alphabet to make its content available for training Google’s AI models.
Under the new partnership, ChatGPT and other OpenAI products will use Reddit’s application programming interface, the means by which Reddit distributes its content.
OpenAI will also become a Reddit advertising partner, the company said.
Ahead of Reddit’s March IPO, Reuters reported that Reddit struck its deal with Alphabet, worth about $60 million per year.
Investors view the sale of its data for training AI models as a key source of revenue for Reddit beyond its advertising business.
The social media company earlier this month reported strong revenue growth and improving profitability in its first earnings report since its market debut, indicating that its Google deal and its push to grow its ads business were paying off.
Reddit’s shares rose 10.5 percent to $62.31 after the bell. As of Wednesday’s close, the stock is up nearly 12 percent since its market debut in March.