Apple’s Image Playground app is said to have bias issues. A machine learning scientist recently shared several outputs generated using the artificial intelligence (AI) app and claimed that they depicted inaccurate skin tones and hair textures on several occasions. These inaccuracies were also said to be paired with specific racial stereotypes, compounding the problem. It is difficult to say whether the alleged problem is a one-off incident or a widespread issue. Notably, the Cupertino-based tech giant first introduced the app as part of the Apple Intelligence suite with the iOS 18.2 update.
Apple’s Image Playground App Might Have Bias Issues
Jochem Gietema, the Machine Learning Science Lead at Onfido, shared a blog post highlighting his experience with Apple’s Image Playground app. In the post, he shared several sets of outputs generated using the app and pointed out instances of racial bias in the model powering it. Notably, Gadgets 360 staff members did not notice any such biases while testing the app.
“While experimenting, I noticed that the app altered my skin tone and hair depending on the prompt. Professions like investment banker vs. farmer produce images with very different skin tones. The same goes for skiing vs. basketball, streetwear vs. suit, and, most problematically, affluent vs. poor,” Gietema said in a LinkedIn post.
Alleged biased outputs generated using the Image Playground app
Photo Credit: Jochem Gietema
Such inaccuracies and biases are not unusual with AI models, which are trained on large datasets that can contain similar stereotypes. Last year, Google’s Gemini AI model faced backlash for similar biases. However, companies are not helpless against such outputs and often implement multiple layers of safeguards to prevent them.
Apple’s Image Playground app also comes with certain restrictions designed to prevent issues associated with AI-generated images. For instance, the Apple Intelligence app only supports cartoon and illustration styles to avoid the creation of deepfakes. Additionally, images are generated with a narrow field of view that usually captures only the face along with limited surrounding detail, which is also intended to curb such instances of bias and inaccuracy.
The tech giant also does not allow prompts containing negative words, the names of celebrities or public figures, and other restricted terms, to prevent users from abusing the tool for unintended use cases. However, if the allegations are true, the iPhone maker will need to add further layers of safety to ensure users do not feel discriminated against while using the app.
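For illustration only, the minimal Swift sketch below shows the kind of prompt-level blocklist filtering described above. The PromptGuard type, the blocked terms, and the matching logic are hypothetical assumptions made for this example; Apple's actual safeguards are not public and would likely rely on far more sophisticated, curated systems.

```swift
import Foundation

// Hypothetical illustration of prompt-level filtering of the kind described
// in the article. This is NOT Apple's implementation; the type name, word
// list, and logic are invented purely to show how a simple blocklist
// guardrail can work.
struct PromptGuard {
    // Example blocked terms (placeholders). A real system would use much
    // larger curated lists and likely trained classifiers, not exact matches.
    let blockedTerms: Set<String> = ["celebrityname", "publicfigure", "violent"]

    func isAllowed(_ prompt: String) -> Bool {
        // Split the prompt into lowercase words.
        let words = prompt.lowercased()
            .components(separatedBy: CharacterSet.alphanumerics.inverted)
            .filter { !$0.isEmpty }
        // Reject the prompt if any word appears on the blocklist.
        return !words.contains { blockedTerms.contains($0) }
    }
}

let guardrail = PromptGuard()
print(guardrail.isAllowed("a farmer in a field"))   // true
print(guardrail.isAllowed("a violent scene"))       // false
```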