Twitter competition reveals bias in image-cropping algorithm

Rumman Chowdhury, director of Twitter's Machine Learning, Ethics, Transparency and Accountability (META) team, wraps up the Twitter Algorithmic Bias Bug Bounty Challenge results. (File/YouTube)
Updated 12 August 2021

  • Competition winner revealed that the algorithm prefers slimmer, younger, lighter-skinned faces

DUBAI: Twitter’s image-cropping algorithm prefers faces that are slimmer, younger, and have lighter skin, according to a researcher who took part in a bug bounty competition organized by the social networking company.

The program, launched on July 30, invited researchers to hunt for biases as part of Twitter’s first algorithmic bias bounty challenge, held at the DEF CON convention.

The project was led by Rumman Chowdhury, director of Twitter’s Machine Learning, Ethics, Transparency and Accountability (META) team, and Jutta Williams, a product manager on the same team.

In a blog post, they said: “Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public.

“For this challenge, we are re-sharing our saliency model and the code used to generate a crop of an image given a predicted maximally salient point and asking participants to build their own assessment.”
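The cropping step they describe is simple in outline: the saliency model predicts the most salient point in the image, and a fixed-size window is centered on it, clamped to the image bounds. A minimal sketch of that step in Python (the function name and array layout are illustrative, not Twitter’s released code):

```python
import numpy as np

def crop_about_salient_point(image: np.ndarray, point: tuple[int, int],
                             crop_h: int, crop_w: int) -> np.ndarray:
    """Center a crop window on the predicted maximally salient point,
    clamping it so the window stays inside the image bounds."""
    img_h, img_w = image.shape[:2]
    y, x = point
    # Assumes the requested crop fits inside the image.
    top = min(max(y - crop_h // 2, 0), img_h - crop_h)
    left = min(max(x - crop_w // 2, 0), img_w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Because the window always chases the single most salient point, any demographic skew in the saliency model translates directly into who gets kept in, or cut out of, the preview.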

The contest’s first prize of $3,500 went to Bogdan Kulynych, a Ph.D. student at the Swiss Federal Institute of Technology in Lausanne. His submission showed how algorithmic models amplify real-world biases and societal expectations of beauty.

Kulynych’s approach consisted of artificially generating faces with differing features and then running them through the algorithm. He found that the algorithm favored younger, slimmer, and lighter-skinned faces over older, wider, or darker-skinned faces.
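In outline, the test amounts to scoring each generated variant with the saliency model and comparing the scores: if the same synthetic face consistently scores higher after being made lighter, slimmer, or younger, that is evidence of bias. A rough sketch of the comparison, assuming a hypothetical `predict` interface to the saliency model rather than the released API:

```python
import numpy as np

def max_saliency(saliency_model, image: np.ndarray) -> float:
    """Score an image by the peak value of its predicted saliency map."""
    return float(saliency_model.predict(image).max())

def rank_variants(saliency_model,
                  variants: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Rank variants of the same synthetic face (e.g. 'lighter', 'older',
    'slimmer') by how strongly the saliency model attends to them."""
    scores = {name: max_saliency(saliency_model, img)
              for name, img in variants.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```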

Some of the faces generated to test the algorithm. (File/GitHub)

After winning the competition, he highlighted on Twitter the “fast-paced” nature of the contest compared with academic publishing. He acknowledged that his submission “came with plenty of limitations that future analyses using the methodology should account for,” but argued this was a “good thing”: even if some submissions only hinted “at the possibility of the harm without rigorous proofs,” the approach could detect such harms early on.

“We should not forget that algorithmic bias is only a part of a bigger picture. Addressing bias in general and in competitions like this should not end the conversation about the tech being harmful in other ways, or by design, or by fact of existing,” Kulynych added.

This is not the first time biases in the image-cropping algorithm have come to light, with several users pointing out the issue last year. At the time, Twitter said in a statement that its team had tested the algorithm prior to launching it but “did not find evidence of racial or gender bias.”

The company said, however, that it would continue its analysis and open-source it for others to “review and replicate.”

In a blog post in May, Chowdhury announced the results of that analysis, which showed the algorithm favored women over men by 8 percent, white individuals over Black individuals by 4 percent, white women over Black women by 7 percent, and white men over Black men by 2 percent.
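Those figures are percentage-point margins from paired comparisons: each test image contained one face from each group, and the metric is how much more often the crop landed on one group than the other. A toy illustration of the arithmetic (not Twitter’s evaluation code; the pairing setup is an assumption based on the published description):

```python
def favor_rate(choices: list[str], group_a: str, group_b: str) -> float:
    """Percentage-point margin by which crops favored group_a over group_b
    across paired images containing one face from each group."""
    a = sum(c == group_a for c in choices)
    b = sum(c == group_b for c in choices)
    return 100.0 * (a - b) / len(choices)

# If 54 of 100 paired crops centered on the woman and 46 on the man,
# the margin is 8 percentage points in favor of women.
print(favor_rate(["woman"] * 54 + ["man"] * 46, "woman", "man"))  # 8.0
```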

Based on the results, Twitter began testing and rolling out full images in the feed as well as a true preview before posting.

She said: “We’re working on further improvements to media on Twitter that build on this initial effort, and we hope to roll it out to everyone soon.”

The competition was another step in identifying flaws in the algorithm from an outside perspective.

In their blog post, Chowdhury and Williams said: “We want to take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves.”


Grok faces more scrutiny over deepfakes as Irish regulator opens EU privacy investigation

Updated 17 February 2026

  • The regulator says Grok has created and shared sexualized images of real people, including children. Researchers say some examples appear to involve minors
  • X also faces other probes in Europe over illegal content and user safety

LONDON: Elon Musk’s social media platform X faces a European Union privacy investigation after its Grok AI chatbot started spitting out nonconsensual deepfake images, Ireland’s data privacy regulator said Tuesday.

Ireland’s Data Protection Commission said it notified X on Monday that it was opening the inquiry under the 27-nation EU’s strict data privacy regulations, adding to the scrutiny X is facing in Europe and other parts of the world over Grok’s behavior.

Grok sparked a global backlash last month after it started granting requests from X users to undress people with its AI image-generation and editing capabilities, including depicting women in transparent bikinis or revealing clothing. Researchers said some images appeared to include children. The company later introduced some restrictions on Grok, though authorities in Europe weren’t satisfied.

The Irish watchdog said its investigation focuses on the apparent creation and posting on X of “potentially harmful” nonconsensual intimate or sexualized images containing or involving the personal data of Europeans, including children.

X did not respond to a request for comment.

Grok was built by Musk’s artificial intelligence company xAI and is available through X, where its responses to user requests are publicly visible.

The watchdog said the investigation will seek to determine whether X complied with the EU data privacy rules known as GDPR, the General Data Protection Regulation. The Irish regulator takes the lead on enforcing the bloc’s privacy rules because X’s European headquarters is in Dublin. Violations can result in hefty fines.

The regulator “has been engaging” with X since media reports started circulating weeks earlier about “the alleged ability of X users to prompt the @Grok account on X to generate sexualized images of real people, including children,” Deputy Commissioner Graham Doyle said in a press statement.

Spain’s government has ordered prosecutors to investigate X, Meta and TikTok for alleged crimes related to the creation and proliferation of AI-generated child sex abuse material on their platforms, Spanish Prime Minister Pedro Sánchez said on Tuesday.

“These platforms are attacking the mental health, dignity and rights of our sons and daughters,” Sánchez wrote on X.

Spain announced earlier this month that it was pursuing a ban on access to social media platforms for under-16s.

Earlier this month, French prosecutors raided X’s Paris offices and summoned Musk for questioning. Meanwhile, the data privacy and media regulators in Britain, which has left the EU, have opened their own investigations into X.

The platform is already facing a separate EU investigation from Brussels over whether it has been complying with the bloc’s digital rulebook for protecting social media users, which requires platforms to curb the spread of illegal content such as child sexual abuse material.