Runaway growth of AI chatbots portends a future poised between utopia and dystopia

Updated 18 April 2023

  • Engineers who had been slogging away for years in academia and industry are finally having their day in the sun
  • Job displacements and social upheavals are nothing compared to the extreme risks posed by advancing AI tech

DUBAI: It was way back in the late 1980s that I first encountered the expressions “artificial intelligence,” “pattern recognition” and “image processing.” I was completing the final semester of my undergrad college studies, while also writing up my last story for the campus magazine of the Indian Institute of Technology at Kharagpur.

Never having come across these technical terms during the four years I majored in instrumentation engineering, I was surprised to discover that the smartest professors and the brightest postgrad students of the electronics and computer science and engineering departments of my own college were neck-deep in research and development work involving AI technologies. All while I was blissfully preoccupied with the latest Madonna and Billy Joel music videos and Time magazine stories about glasnost and perestroika.




Now that the genie is out, the question is whether or not Big Tech is willing or even able to address the issues raised by the runaway growth of AI. (Supplied)

More than three decades on, William Faulkner’s oft-quoted saying, “the past is never dead. It is not even past,” rings resoundingly true to me, albeit for reasons more mundane than sublime. Terms I seldom bumped into as a newspaperman and editor since leaving campus — “artificial intelligence,” “machine learning” and “robotics” — have sneaked back into my life, this time not as semantic curiosities but as man-made creations for good or ill, with the power to make me redundant.

Indeed, an entire cottage industry that did not exist just six months ago has sprung up to both feed and whet a ravenous global public appetite for information on, and insights into, ChatGPT and other AI-powered web tools.




Teachers are seen behind a laptop during a workshop on the ChatGPT bot organized by the School Media Service (SEM) of the public education department of the Swiss canton of Geneva on February 1, 2023. (AFP)

The initial questions about what kinds of jobs would be created and how many professions would be affected have given way to far more profound discussions. Can conventional religions survive the challenges that will spring from artificial intelligence in due course? Will humans ever need to rack their brains to write fiction, compose music or paint masterpieces? How long will it take before a definitive cure for cancer is found? Can public services and government functions be performed by vastly more efficient and cheaper chatbots in the future?

Until October last year, few of us employed outside the arcane world of AI could have anticipated an explosion of existential questions of this magnitude in our lifetimes. The speed with which they have moved from the fringes of public discourse to center stage reflects at once the severely disruptive nature of these developments and their potentially unsettling impact on the future of civilization. Like it or not, we are all engineers and philosophers now.




Attendees watch a demonstration on artificial intelligence during the LEAP Conference in Riyadh last February. (Supplied)

By most accounts, as yet no jobs have been eliminated and no collapse of the post-Impressionist art market has occurred as a result of the adoption of AI-powered web tools, but if the past (as well as Ernest Hemingway’s famous phrase) is any guide, change will happen at first “gradually, then suddenly.”

In any event, the world of work has been evolving almost imperceptibly but steadily since automation disrupted the settled rhythms of manufacturing and service industries that were essentially byproducts of the First Industrial Revolution.

For people of my age group, a visit to a bank today bears little resemblance to one undertaken in the 1980s and 1990s, when withdrawing cash meant standing in an orderly line first for a metal token, then waiting patiently in a different queue to receive a wad of hand-counted currency notes, each process involving the signing of multiple counterfoils and the spending of precious hours.

Although the level of efficiency likely varied from country to country, the workflow required to dispense cash to bank customers before the advent of automated teller machines was more or less the same.

Similarly, a visit to a supermarket in any modern city these days feels rather different from the experience of the late 1990s. The row upon row of checkout staff have all but disappeared, leaving behind a lean-and-mean mix with the balance tilted decidedly in favor of self-service lanes equipped with bar-code scanners, contactless credit-card readers and thermal receipt printers.

Whatever one may call these endangered jobs in retrospect, whether minimum-wage drudgery or a decent livelihood, society seems to have accepted that there is no turning the clock back on technological advances whose benefits outweigh the costs, at least from the point of view of business owners and shareholders of banks and supermarket chains.

Likewise, with the rise of generative AI (GenAI) a new world order (or disorder) is bound to emerge, perhaps sooner rather than later, but of what kind, only time will tell.




Just four months after ChatGPT was launched, OpenAI's conversational chatbot is facing at least two complaints before a regulatory body in France over the use of personal data. (AFP)

In theory, ChatGPT could tell too. To this end, many a publication, including Arab News, has carried interviews with the chatbot, hoping to get the truth from the machine’s mouth, so to speak, instead of relying on the thoughts and prescience of mere humans.

But the trouble with ChatGPT is that the answers it punches out depend on the “prompts” or questions it is asked. The answers will also vary with every update of its training data and the lessons it draws from these data sets’ internal patterns and relationships. Put simply, what ChatGPT or GPT-4 says about its destructive powers today is unlikely to remain unchanged a few months from now.

Meanwhile, tantalizing though the tidbits have been, the occasional interview with the CEO of OpenAI, Sam Altman, or the CEO of Google, Sundar Pichai, has shed little light on the ramifications of rapid GenAI advances for humanity.




OpenAI CEO Sam Altman, left, and Microsoft CEO Satya Nadella. (AFP)

With multibillion-dollar investments at stake and competition for market share intensifying between Silicon Valley companies, these chief executives, along with Microsoft CEO Satya Nadella, can hardly be expected to objectively answer the many burning questions, starting with whether Big Tech ought to declare “a complete global moratorium on the development of AI.”

Unfortunately for a large swathe of humanity, the great debates of the day, featuring polymaths who can talk without fear or favor about a huge range of intellectual and political trends, are raging mostly out of reach behind strict paywalls of publications such as Bloomberg, Wall Street Journal, Financial Times, and Time.

An essay by Niall Ferguson, the pre-eminent historian of the ideas that define our time, published in Bloomberg on April 9, offers a peek into the deepest worries of philosophers and futurists, implying that the fears of large-scale job displacements and social upheavals are nothing compared to the extreme risks posed by galloping AI advancements.

“Most AI does things that offer benefits not threats to humanity … The debate we are having today is about a particular branch of AI: the large language models (LLMs) produced by organizations such as OpenAI, notably ChatGPT and its more powerful successor GPT-4,” Ferguson wrote before going on to unpack the downsides.

In sum, he said: “The more I read about GPT-4, the more I think we are talking here not about artificial intelligence … but inhuman intelligence, which we have designed and trained to sound convincingly like us. … How might AI off us? Not by producing (Arnold) Schwarzenegger-like killer androids (of the 1984 film “The Terminator”), but merely by using its power to mimic us in order to drive us insane and collectively into civil war.”

Intellectually ready or not, behemoths such as Microsoft, Google and Meta, together with not-so-well-known startups like Adept AI Labs, Anthropic, Cohere and Stability AI, have had greatness thrust upon them by virtue of having developed their own LLMs with the aid of advances in computational power and mathematical techniques that have made it possible to train AI on ever larger data sets.

Just as in Hindu mythology Shiva, as Nataraja, the Lord of Dance, takes on the persona of creator, protector and destroyer, so in the real world tech giants and startups (answerable primarily to profit-seeking shareholders and venture capitalists) find themselves playing what many regard as the combined role of creator, protector and potential destroyer of human civilization.




Microsoft is the “exclusive” provider of cloud computing services to OpenAI, the developer of ChatGPT. (AFP file)

While it does seem that a science-fiction future is closer than ever before, no technology exists as of now to turn back time to 1992 and enable me to switch from instrumentation engineering to computer science instead of a vulnerable occupation like journalism. Jokes aside, it would be disingenuous of me to claim that I have not been pondering the “what-if” scenarios of late.

Not because I am terrified of being replaced by an AI-powered chatbot in the near future and compelled to sign up for retraining as a food-delivery driver. Journalists are certainly better psychologically prepared for such a drastic reversal of fortune than the bankers and property owners in Thailand who overnight had to learn to sell food on the footpaths of Bangkok to make a living in the aftermath of the 1997 Asian financial crisis.

The regret I have is more philosophical than material: We are living in a time when engineers who had been slogging away for years in the forgotten groves of academe and industry, pushing the boundaries of AI and machine learning one line of autocorrect code at a time, are finally getting their due as the true masters of the universe. It would have felt good to be one of them, no matter how relatively insignificant one’s individual contribution.

There is a vicarious thrill, though, in tracking the achievements of a man by the name of P. Sundararajan, who won admission to my alma mater to study metallurgical engineering one year after I graduated.




Google Inc. CEO Sundar Pichai (C) is applauded as he arrives to address students during a forum at The Indian Institute of Technology in Kharagpur, India, on January 5, 2017. (AFP file)

Now 50 years old, he bears a big responsibility for shaping the GenAI landscape, although he probably had no inkling of what fate had in store for him when he was focused on his electronic materials project in the final year of his undergrad studies. That person is none other than Sundar Pichai, whose path to the office of Google CEO went via IIT Kharagpur, Stanford University and the Wharton business school.

Now, just as in the final semester of my engineering studies, I have no illusions about the exceptionally high IQ required to be even a writer of code for sophisticated computer programs. In an age of increasing specialization, “horses for courses” is not only a rational approach, it is practically the only game in town.

I am perfectly content with the knowledge that in the pre-digital 1980s, well before the internet as we know it had even been created, I had got a glimpse of the distant exciting future while reporting on “artificial intelligence,” “pattern recognition” and “image processing.” Only now do I fully appreciate how great a privilege it was.

 


US media experts demand review of New York Times story on sexual violence by Hamas on Oct. 7


  • 64 American journalism professionals sign letter accusing the newspaper of failing to do enough to investigate and confirm the evidence supporting the allegations in its story
  • It concerns a story headlined ‘Screams Without Words: Sexual Violence on Oct. 7’ that ran on the front page of the newspaper on Dec. 28
CHICAGO: Sixty-four American journalism professionals signed a letter sent to New York Times bosses expressing concern about a story published by the newspaper that accused Palestinians of sexual violence against Israeli civilians during the Oct. 7 attacks.
It concerns a story headlined “Screams Without Words: Sexual Violence on Oct. 7” that ran on the front page of the newspaper on Dec. 28 last year.
In the letter, addressed to Arthur G. Sulzberger, chairperson of The New York Times Co., and copied to executive editors Joseph Kahn and Philip Pan, the journalism professionals, who included Christians, Muslims and Jews, demanded an “external review” of the story.
It is one of several news reports by various media organizations that have been used by the Israeli government to counter criticism of the brutal nature of its near-seven-month military response to the Hamas attacks, a campaign during which more than 34,000 Palestinians have been killed and most of the homes, businesses, schools, mosques, churches and hospitals in Gaza have been destroyed, displacing more than a million people, many of whom now face famine.
The letter, a copy of which was obtained by Arab News, states that “The Times’ editorial leadership … remains silent on important and troubling questions raised about its reporting and editorial processes.”
It continues: “We believe this inaction is not only harming The Times itself, it also actively endangers journalists, including American reporters working in conflict zones, as well as Palestinian journalists (of which, the Committee to Protect Journalists reports, around 100 have been killed in this conflict so far).”
Shahan Mufti, a journalism professor at the University of Richmond, a former war correspondent and one of the organizers of the letter, told Arab News that The New York Times failed to do enough to investigate and confirm the evidence supporting the allegations in its story.
“The problem is the New York Times is no longer responding to criticism and is no longer admitting when it is making mistakes,” he said. The newspaper is one of the most influential publications in the US, he noted, and its stories are republished by smaller newspapers across the country.
This week, the Israeli government released a documentary, produced by pro-Israel activist Sheryl Sandberg, called “Screams Before Silence,” which it said “reveals the horrendous sexual violence inflicted by Hamas on Oct. 7.” It includes interviews with “survivors from the Nova Festival and Israeli communities, sharing their harrowing stories” and “never-before-heard eyewitness accounts from released hostages, survivors and first responders.”
In promotional materials distributed by Israeli consulates in the US, the producers of the documentary said: “During the attacks at the Nova Music Festival and other Israeli towns, women and girls suffered rape, assault and mutilation. Released hostages have revealed that Israeli captives in Gaza have also been sexually assaulted.”
Critics have accused mainstream media organizations of repeating unverified allegations made by the Israeli government and pro-Israel activists about sexual violence on Oct. 7, with some alleging it is a deliberate attempt to fuel anti-Palestinian sentiment in the US and help justify Israel’s military response.
Some suggest such stories have empowered police and security officials in several parts of the US to crack down on pro-Palestinian demonstrations, denouncing the protesters as “antisemitic” even though some of them are Jewish.
New York Mayor Eric Adams, for example, asserted, without offering evidence, that recent protests by students on college campuses against the war in Gaza had been “orchestrated” by “outside agitators.”
Israeli Prime Minister Benjamin Netanyahu has said the protests against his country’s military campaign in Gaza are antisemitic in nature.
Jeff Cohen, a retired associate professor of journalism at Roy H. Park School of Communications at Ithaca College, told Arab News The New York Times story was “flawed” but has had “a major impact in generating support for Israeli vengeance” in Gaza.
He continued: “Israeli vengeance has claimed the lives of tens of thousands of civilians. That’s why so many professors of journalism and media are calling for an independent investigation of what went wrong.
“That (New York Times) story, along with other dubious or exaggerated news reports — such as the fable about Hamas ‘beheading babies’ that President Biden promoted — have inflamed war fever.”
Cohen said the US media “too often … have promoted fables aimed at inflaming war fever,” citing as an example reports in 1990 that Iraqi soldiers had removed babies from incubators after their invasion of Kuwait. The assertions helped frame anti-Iraqi public opinion but years later they were proved to be “a hoax,” he added.
“On Oct. 7, Hamas committed horrible atrocities against civilians and it is still holding civilian hostages,” Cohen said. “Journalists must tell the truth about that, without minimizing or exaggerating, as they must tell the truth about the far more horrible Israeli crimes against Palestinian civilians.
“The problem is that the mainstream US news media have a long-standing pro-Israel bias. That bias has been proven in study after study. Further proof came from a recently leaked New York Times internal memo of words that its reporters were instructed to avoid — words like ‘Palestine’ (‘except in very rare cases’), ‘occupied territories’ (say ‘Gaza, the West Bank, etc.’) and ‘refugee camps’ (‘refer to them as neighborhoods, or areas’).”
Mufti, the University of Richmond journalism professor, said belligerents “on both sides” are trying to spin and spread their messages. But he accused Israeli authorities in particular of manipulating and censoring media coverage, including through the targeted killing of independent journalists, among them Palestinians and Arabs, and said this was having the greatest impact among the American public.
“Broadly speaking, a lot of the Western news media, and most of the world news media, do not have access to the reality in Gaza,” he said. “They don’t know. It is all guesswork.
“They are all reporting from Tel Aviv, they are reporting from Hebron, they are reporting from the West Bank. Nobody actually knows what the war looks like. It is all secondhand information.
“Most of the information is coming through the Israeli authorities, government and military. So, of course, the information that is coming out about this war is all filtered through the lens of Israel, and the military and the government.”
Mufti said the story published by The New York Times “probably changed the course, or at least influenced the course, of the war.”
He said it appeared at a time when US President Joe Biden was pushing to end the Israeli military campaign in Gaza “and it entirely changed the conversation. It was a very consequential story. And it so happens it was rushed out and it had holes in it … and it changed the course of the war.”
Mohammed Bazzi, an associate professor with the Arthur L. Carter Journalism Institute at New York University, told Arab News the letter demanding an “external review” of the story is “a simple ask.”
He added: “This story, and others as well, did play a role” in allowing the Israeli military to take action beyond acceptable military practices “and dehumanize Palestinians.” Such dehumanization was on display before Oct. 7, Bazzi said.
“In the Western media there seemed to be far less sympathetic coverage of Palestinians in Israel’s war in Gaza as a consequence of these stories,” he continued.
“We have seen much less profiles of Palestinians … we are beyond 34,000 Palestinians killed but we don’t have a true number or the true scale of the destruction in Gaza — there could be thousands more dead under the rubble and thousands more who will die through famine and malnutrition. This will not stop, as a consequence of what Israel has done.”
Bazzi said the Western media has contributed to the dehumanization of Palestinians more than any other section of the international media, while at the same time humanizing the Israeli victims.
“The New York Times has a great influence on the US media as a whole and sets a standard” for stories and narratives that other media follow, which is “more pro-Israel and less sympathetic to Palestinians,” he added.
Bazzi, among others, said The New York Times has addressed “only a handful of many questions” about its story and needs to do more to present a more accurate account of what happened on Oct. 7.
The letter to New York Times bosses states: “Some of the most troubling questions hovering over the (Dec. 28) story relate to the freelancers who reported a great deal of it, especially Anat Schwartz, who appears to have had no prior daily news-reporting experience before her bylines in The Times.”
Schwartz is described as an Israeli “filmmaker and former air force intelligence official.”
Adam Sella, another apparently inexperienced freelancer who shared the byline on the story, is reportedly the nephew of Schwartz’s partner. The only New York Times staff reporter with a byline on the story was Jeffrey Gettleman.
Media scrutiny of the story revealed that “Schwartz and Sella did the vast majority of the ground reporting, while Gettleman focused on the framing and writing,” according to the letter.
The New York Times did not immediately respond to requests by Arab News for comment.

Creative tech agency Engage Works to launch in Saudi Arabia

Updated 02 May 2024

  • Representation at the Saudi Entertainment and Amusement Expo 2024

DUBAI: Creative technology agency Engage Works has announced its expansion into Saudi Arabia with the acquisition of a new trade license in the Kingdom.

Steve Blyth, founder and group CEO of the agency, told Arab News: “Saudi Arabia feels like the center of the universe right now for the creation of cultural destinations and immersive experiences.

“We get to work on projects that probably wouldn’t happen anywhere else in the world right now. The wealth of untapped cultural assets the Kingdom wants to bring to life — for new, young and international audiences — is unsurpassed.”

The agency will be represented at the Saudi Entertainment and Amusement Expo 2024, which takes place at the Riyadh Front Exhibition and Conference Center from May 7-9.

Alex McCuaig, Engage Works’ strategy director, said: “This is a great opportunity for us to showcase our expertise in creating immersive experiences and to collaborate with other industry leaders to drive innovation and engagement in the region.”

The agency has already won several projects in the Kingdom and will be opening an office in the country in the coming months, he added.

Engage Works currently has premises in London and Dubai, and its clients include Emirates, Accenture, Google, KPMG, Microsoft, and EY.


TikTok announces new safety measures

Updated 02 May 2024

  • Features aimed at enabling safer content creation and sharing

DUBAI: TikTok has announced a slew of safety updates to enhance content creation and sharing on the platform.

The company said the features were designed to provide better transparency and help creators learn about its policies and check their account status.

Adam Presser, head of operations, said: “Creators play a fundamental role in helping maintain a safe and entertaining environment for everyone on TikTok.

“We focus on empowering people with information about our policies and tools so they can safely express themselves and connect with others.”

Effective this month, TikTok’s community guidelines have been updated to include refined definitions and more detailed explanations of the platform’s policies, such as those concerning hate speech and health misinformation.

They also feature expanded guidelines on the moderation of features such as Search, Live and the For You feed.

The platform is revising its eligibility standards for the feed. For example, accounts that repeatedly post content that goes against the standards for the feed might become temporarily ineligible for recommendation, making their content harder to find in searches.

The creators behind these accounts will be notified and be able to appeal the decision.

In order to help people better understand its policies, TikTok will issue a warning when a creator violates the community guidelines for the first time. This will not count toward the account’s strike tally.

The platform will notify creators of any violations and provide details about which rules they have breached and allow them to appeal the decision if needed.

However, violations of zero-tolerance policies, such as those against incitement to violence, are not eligible for such warnings, and the offending accounts will be banned immediately.

Building on the account status page introduced last year, TikTok is launching an account check tool that will allow creators to review their last 30 posts and account status in one place.

It will also roll out a creator code of conduct in the coming weeks, which sets out the standards that creators involved in programs, features, events and campaigns are expected to follow both on and off the platform.

Presser said the standards were being introduced because the company “believes that being a part of these programs is an opportunity that comes with additional responsibilities.”

“This code will also help provide creators with additional reassurance that other participants are meeting these standards too,” he said.


Media watchdog says journalists should be allowed to cover college protests safely

Updated 02 May 2024

  • Journalists said they have been barred from reporting on events

LONDON: Media watchdog Committee to Protect Journalists has called on authorities to allow journalists covering US college protests to do so “freely and safely.”

“Journalists — including student journalists who have been thrust into a national spotlight to cover stories in their communities — must be allowed to cover campus protests without fearing for their safety,” said Katherine Jacobsen, the CPJ’s US, Canada and Caribbean program coordinator.

“Any efforts by authorities to stop them doing their jobs have far-reaching repercussions on the public’s ability to be informed about current events.”

Tensions have escalated between pro-Palestinian demonstrators and law enforcement during recent protests at universities across the US.

On Tuesday night, New York police equipped with anti-riot gear forcibly entered Columbia University’s Hamilton Hall, a focal point of the protests, resulting in the arrest of approximately 300 pro-Palestinian students.

Meanwhile, student journalists at the University of California, Los Angeles, reported being assaulted and exposed to gas during violent clashes. In Northern California, local journalists covering college demonstrations were detained and arrested by police.

The CPJ said at least 13 journalists had been arrested or detained since the start of the Israel-Hamas war on Oct. 7, and that 11 had been assaulted while covering related protests in the US.

Those arrested include FOX 7 reporter Carlos Sanchez, who was shoved to the ground last month while covering a protest at the University of Texas at Austin. He is currently facing two misdemeanor charges.


Universal Music Group artists to return to TikTok after new licensing pact

Updated 02 May 2024

  • New deal to restore label’s songs to platform, increase artists’ protection from AI
  • Universal Music says TikTok accounted for 1 percent of its annual revenue in 2023

LONDON: Universal Music Group and TikTok said on Thursday they had reached a new licensing agreement that will restore the label’s songs and artists to the social media platform as well as give musicians more protections from artificial intelligence.
TikTok began removing Universal’s content from its app after their licensing deal expired in January and the two sides failed to reach agreement on royalties, AI and online safety for TikTok’s users.
Describing their new pact as a multi-dimensional deal, the companies said they were working “expeditiously” to return music by the label’s artists to TikTok, and also said they would team up to realize new monetization opportunities from TikTok’s growing e-commerce capabilities.
They will “work together on campaigns supporting UMG’s artists across genres and territories globally,” the two firms said in a joint statement.
The short video app is a valuable marketing and promotional tool for the music industry. TikTok is where 16- to 19-year-olds in the United States most commonly discover music, ahead of YouTube and music streaming services such as Spotify, according to Midia Research.
“Roughly a quarter of US consumers say they listen to songs they have heard on TikTok,” said Tatiana Cirisano, Midia’s senior music industry analyst.
However, Universal Music claimed TikTok pays its artists and songwriters just a fraction of what the label receives from other major social media platforms.
The music label says TikTok accounted for 1 percent of its annual revenue, or about $110 million, in 2023. YouTube, by contrast, paid the music industry $1.8 billion from user-generated content in the 12 months ending in June 2022, according to Midia.
In a move that may well have eroded the label’s bargaining power, Taylor Swift, one of Universal Music’s biggest acts, allowed a selection of her songs to return to TikTok as she promoted her latest album, “The Tortured Poets Department.”
Swift owns the copyrights to her recordings through her 2018 deal with Universal and can control where her songs are available, according to the Financial Times.
As licensing negotiations resumed in recent weeks, AI remained a major point of contention. Universal has claimed TikTok is “flooded” with AI-generated recordings, including songs that users create with the help of TikTok’s AI songwriting tools.
In Thursday’s deal, TikTok and Universal said that they would work together to ensure AI development across the music industry will protect human artistry and the economics that flow to those artists and songwriters.
“TikTok is also committed to working with UMG to remove unauthorized AI-generated music from the platform, as well as (developing) tools to improve artist and songwriter attribution,” the statement said.
Concerns about AI have grown in the creative community. In April, a non-profit group called the Artist Rights Alliance published an open letter urging the responsible use of the technology. The group of more than 200 musicians and songwriters called on technology companies and digital music services to pledge not to deploy AI in a way that would “undermine or replace the human artistry of songwriters and artists or deny us fair compensation for our work.”
The deal comes amid questions over TikTok’s long-term future in the United States. President Joe Biden signed legislation last week that gives TikTok’s Chinese owner, ByteDance, 270 days to sell its US assets. TikTok has vowed to file suit to challenge the legislation, which it calls a ban.
More than 170 million Americans use its video service, according to TikTok. Globally, it has more than 1.5 billion monthly active users, according to research firm Statista.