Apple heads into annual showcase reeling from AI missteps, tech upheaval and Trump’s trade war

Updated 09 June 2025


  • The pre-summer rite is expected to be more subdued than the feverish anticipation that surrounded the event in 2023, when Apple unveiled a mixed-reality headset
  • Now Apple faces nagging questions about its ability to innovate and to navigate a gauntlet of other challenges as it heads into this year’s Worldwide Developers Conference

CUPERTINO, California: After stumbling out of the starting gate in Big Tech’s pivotal race to capitalize on artificial intelligence, Apple will try to regain its footing Monday at its annual Worldwide Developers Conference.
The pre-summer rite, which attracts thousands of developers to Apple’s Silicon Valley headquarters, is expected to be more subdued than the feverish anticipation that surrounded the event during the previous two years.
In 2023, Apple unveiled a mixed-reality headset that has been little more than a niche product, and last year WWDC trumpeted its first major foray into the AI craze with an array of new features highlighted by the promise of a smarter and more versatile version of its virtual assistant, Siri.
But heading into this year’s showcase, Apple faces nagging questions about whether the nearly 50-year-old company has lost some of the mystique and innovative drive that turned it into a tech trendsetter. Instead of making a big splash as it did with the Vision Pro headset, Apple this year is expected to focus on an overhaul of its software that may include a new, more tactile look for the iPhone’s native apps and a new nomenclature for identifying its operating system updates.
Even though it might look like Apple is becoming a technological laggard, Forrester Research analyst Thomas Husson contends the company still has ample time to catch up in an AI race that’s “more of a marathon than a sprint. It will force Apple to evolve its operating systems.”
If reports about its iOS naming scheme pan out, Apple will switch to a method that automakers have used to telegraph their latest car models by linking them to the year after they first arrive at dealerships. That would mean the next version of the iPhone operating system due out this autumn will be known as iOS 26 instead of iOS 19 — as it would be under the current sequential naming approach.
Whatever it’s named, the next iOS will likely be released as a free update in September, around the same time as the next iPhone models if Apple follows its usual road map.
Meanwhile, Apple’s references to AI may be less frequent than last year when the technology was the main attraction.
While some of the new AI tricks compatible with the latest iPhones began rolling out late last year as part of free software updates, Apple still hasn’t been able to soup up Siri in the ways that it touted at last year’s conference. The delays became so glaring that a chastened Apple retreated from promoting Siri in its AI marketing campaigns earlier this year.
“It’s just taking a bit longer than we thought,” Apple CEO Tim Cook told analysts last month when asked about the company’s headaches with Siri. “But we are making progress, and we’re extremely excited to get the more personal Siri features out there.”
While Apple has been struggling to make AI that meets its standards, the gap separating it from other tech powerhouses is widening. Google keeps packing more AI into its Pixel smartphone lineup while introducing more of the technology into its search engine to dramatically change the way it works. Samsung, Apple’s biggest smartphone rival, is also leaning heavily into AI. Meanwhile, ChatGPT recently struck a deal that will bring former Apple design guru Jony Ive into the fold to work on a new device expected to compete against the iPhone.
“While much of WWDC will be about what the next great thing is for the iPhone, the unspoken question is: What’s the next great thing after the iPhone?” said Dipanjan Chatterjee, another analyst for Forrester Research.
Besides facing innovation challenges, Apple also faces regulatory threats that could siphon away billions of dollars in revenue that help finance its research and development. A federal judge is currently weighing whether proposed countermeasures to Google’s illegal monopoly in search should include a ban on long-running deals worth $20 billion annually to Apple, while another federal judge recently banned the company from collecting commissions on in-app transactions processed outside its once-exclusive payment system.
On top of all that, Apple has been caught in the cross-hairs of President Donald Trump’s trade war with China, a key manufacturing hub for the Cupertino, California, company. Cook successfully persuaded Trump to exempt the iPhone from tariffs during the president’s first administration, but he has had less success during Trump’s second term, as the president seems more determined to prod Apple to make its products in the US.
“The trade war and uncertainty linked to the tariff policy is of much more concern today for Apple’s business than the perception that Apple is lagging behind on AI innovation,” Husson said.
The multi-dimensional gauntlet facing Apple is spooking investors, causing the company’s stock price to plunge by nearly 20 percent so far this year — a decline that has erased $750 billion in shareholder wealth. After beginning the year as the most valuable company in the world, Apple now ranks third behind long-time rival Microsoft, another AI leader, and AI chipmaker Nvidia.


Keep it real: Tech giants urged to lead on safeguarding online privacy

Updated 10 December 2025


  • AI, deepfakes, misinformation under scrutiny at Bridge Summit
  • Media, tech professionals discuss how to keep users safe

ABU DHABI: As AI-generated deepfakes and bots grow more sophisticated, online privacy and identity protection have become urgent global concerns, especially for journalists, influencers and media professionals, whose lives unfold in the digital spotlight.

The growing threats of impersonation, character assassination and coordinated online abuse were at the center of a high-stakes conversation on the second day of the Bridge Summit in Abu Dhabi, where regional and international leaders from the technology and media fields tackled the complex risks surrounding digital safety, security and trust in an AI-powered world.

Adeline Hulin, chief of unit, media and information literacy at UNESCO, highlighted the risks that many people, in particular children and women, are facing online. 

Although her work has long centered on promoting safe internet practices, she said that the onus of safeguarding online privacy and security rested primarily with technology companies — the only actors, she argued, capable of keeping pace with the rapid evolution of AI.

“It is going to be really important that instead of people constantly having to adapt to the technology, if the technology itself is more user-centric,” she told the summit.

“We can train people to recognize deepfakes, but technology can do that quicker.”

Major tech companies have come under fire in recent years for failing to tackle harassment and misinformation. This has led to a litany of legislation as governments try to gain control of a growing problem.

But some companies appear to be heeding the call. Erin Relford, senior privacy engineer at Google, said her company was working to embed privacy protections at the infrastructure level, beneath the platform.

“We want to give consumers the choice of how much they can share data-wise,” she said.

“The biggest challenge is making sure you have the right people in the room to create these privacy protection platforms.”

Privacy-enhancing technology would bring several tools that empower users to understand how their data is being monetized and aggregated, Relford said.

Google had been working to change the parental controls and make it easier for users to understand their protection, she said, but admitted it was still difficult and more education was needed.

“Most of the power lies within the user. Consumers drive what is popular. In terms of organizations that protect your privacy, we want to encourage them and use their services rather than empowering websites that don’t,” she said.

Education is key 

Still, Relford argued that education was fundamental in rolling out privacy tools. Tech companies could only do so much if people did not increase their awareness online, she said.

“The better we educate people about privacy tools, the less harm we have from the ground up.”

Echoing similar sentiments, Hulin promoted the idea of including online literacy in school curricula. Even high-profile moves, like Australia’s recent headline-grabbing ban on under-16s using social media, would do little to reduce the risks without more education.

“Even if there is banning, it’s not going to change misinformation and disinformation. You still need to teach these kids about the information ecosystem,” she said.

“Parents need to be really interested in the news information that your children are consuming.”

Assel Mussagaliyeva-Tang, founder of Singapore-based startup EDUTech Future, said that the AI revolution demanded close collaboration between schools, universities and families to equip children with the skills to navigate new technologies safely and responsibly.

“We need to set up the guardrails and protection of the kids because they are not aware how the model will respond to their needs,” she said.

A UNESCO survey found that 62 percent of digital creators skip rigorous fact-checking, while a 2024 YouGov study showed only 27 percent of young adults feel confident about AI in education.

Mussagaliyeva-Tang said educators needed to focus on preparing and nurturing adults who were “ready for the world,” by integrating ethics, data literacy and critical thinking into curricula.

But she said that universities and the broader education system remained behind the curve in adapting to emerging technologies and equipping students with the skills needed for responsible digital engagement.

Likewise, tech companies needed to be transparent and inclusive in training their data in a way that represented different cultures, she said.

While global regulations on AI remain fragmented, Dr. Luca Iando, dean and distinguished chair at the Collins College of Professional Studies at St. John’s University, called on educational institutions to actively collaborate with technology platforms to help shape educational content and mitigate the potential harm of AI on children, especially as technologies continue to grow.

He warned of young people’s overreliance on AI and said that, in the long term, educators needed to focus on developing “durable, human skills” in students and to transform the types of assignments and coursework to meet the new age of AI.

There needed to be guidelines for students on using AI responsibly, to prepare them for the workplace, he said.

Highlighting the skills gap between educational institutions and the modern workplace, Mussagaliyeva-Tang said: “Employers want professionals. They don’t have time and the budgets to retrain after the outdated curriculum of the university.”

The rise of AI demanded a rethinking of the true purpose of education to nurture individuals who strove to make a positive impact on a rapidly evolving world, she said.