Earlier this week, ChatGPT’s creator OpenAI unveiled two new reasoning models that, it claims, are capable of “thinking with images.” The o3 and o4-mini models can interpret and manipulate images and pull in outside information to improve their output. At the same time, these capable models are fueling fun side quests, including using ChatGPT to determine the locations shown in photos, also known as geolocating. If not used responsibly, that trick can turn into a privacy nightmare.
Shortly after the models’ release, expert users discovered their ability to identify the locations in photos with little additional input. Of the two, o3, the more advanced model, appears especially proficient at the task, and we could already be witnessing the origins of yet another viral trend started by ChatGPT.
The models can edit images, cropping or zooming in on them, to extract information. Multiple examples demonstrate o3’s ability to locate (presumably) any spot on Earth, even when obstructions such as people block the location’s central attraction. The model responds with precise geographical coordinates along with the name of the place, and the trick seemingly also works with indoor photos.
Wharton associate professor and X influencer Ethan Mollick confirmed the model is not simply pulling geotagged metadata from the photos; it does the reasoning itself. Like any AI model, it is prone to incorrect responses, especially with limited cues such as a single image. But even when it gets the location wrong on the first attempt, the model persistently crops and re-examines the image until you confirm it has identified the right location, as demonstrated by X user Brett Cooper.
Although geolocating can be a fun activity, it has largely been limited to experts, who, we hope, use their ability to pinpoint an exact geographical location from a single photo responsibly. ChatGPT’s latest update, however, makes the process effortless for anyone with access to the newest models.
Doubts about its accuracy aside, this particular advancement poses a massive risk of misuse, especially in the absence of any guardrails preventing it from being used to determine someone’s location without their consent. With it, the journey from cyberstalking to stalking in the physical world may take only a few minutes, and we hope OpenAI takes the right steps to address the threat.