WASHINGTON: Apple on Wednesday apologized for sharing some of what its digital assistant Siri heard with quality-control workers, as it unveiled new rules for handling data from conversations.
Under the changes, Apple will allow its employees to review conversations only from customers who opt into the “Siri grading” program to improve the voice recognition technology. Apple will also delete by default any recordings used for the program.
“We realize we haven’t been fully living up to our high ideals, and for that we apologize,” Apple said in a post.
“We’ve decided to make some changes to Siri” as a result of concerns expressed about the grading program, the company added. “Our goal with Siri, the pioneering intelligent assistant, is to provide the best experience for our customers while vigilantly protecting their privacy.”
Computer-generated transcripts will still be used to hone the ability of the software to understand what people say and mean, the company said.
Apple suspended the program after news broke that contractors were hearing confidential medical information, criminal dealings and even sexual encounters.
The California tech giant was among several firms scrutinized for using contractors to “listen” to conversations with digital assistants to improve the artificial intelligence software.
If customers opt in, only Apple employees will be allowed to listen to audio samples of Siri interactions, and they will “work to delete any recording which is determined to be an inadvertent trigger” of the voice-commanded digital assistant, according to the company.
“We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place,” Apple said.
Google and Amazon have also announced changes to their programs in response to privacy concerns.
Apple apologizes for listening to Siri talk, sets new rules
OpenAI’s Altman says world ‘urgently’ needs AI regulation
NEW DELHI: Sam Altman, head of ChatGPT maker OpenAI, told a global artificial intelligence conference on Thursday that the world “urgently” needs to regulate the fast-evolving technology.
An organization could be set up to coordinate these efforts, similar to the International Atomic Energy Agency (IAEA), he said.
Altman is one of a host of top tech CEOs in New Delhi for the AI Impact Summit, the fourth annual global meeting on how to handle advanced computing power.
“Democratization of AI is the best way to ensure humanity flourishes,” he said on stage, adding that “centralization of this technology in one company or country could lead to ruin.”
“This is not to suggest that we won’t need any regulation or safeguards,” Altman said.
“We obviously do, urgently, like we have for other powerful technologies.”
Many researchers and campaigners believe stronger action is needed to combat emerging issues, ranging from job disruption to sexualized deepfakes and AI-enabled online scams.
“We expect the world may need something like the IAEA for international coordination of AI,” with the ability to “rapidly respond to changing circumstances,” Altman said.
“The next few years will test global society as this technology continues to improve at a rapid pace. We can choose to either empower people or concentrate power,” he added.
“Technology always disrupts jobs; we always find new and better things to do.”
Generative AI chatbot ChatGPT has 100 million weekly users in India, more than a third of whom are students, he said.
Earlier on Thursday, OpenAI and Indian IT giant Tata Consultancy Services (TCS) announced a plan to build data center infrastructure in the South Asian country.