Most of us now get our information from AI chatbots and search engines. Even Google shows an AI summary first, before pointing us toward the sources it compiled the answer from.

A new study from Yale suggests that while AI-generated answers are fast, convenient, and easy to read, they can also shape our opinions. Daniel Karell, an assistant professor of sociology at Yale, and his team wanted to find out whether reading AI-written summaries of historical events helped people learn better than reading human-written ones.

To test this, participants were shown short summaries of historical events, some written by humans and others by AI tools like ChatGPT, and then quizzed on what they remembered.

The result? People who read AI-written summaries consistently answered more questions correctly.

Is AI just better at disseminating information than humans?

Karell attributes this to how AI presents information. “It’s like the model took Wikipedia and made it more readable,” he said. The AI summaries were smoother, clearer, and easier to retain, regardless of whether participants knew they were reading AI-generated content.

That means that even when people were told a summary was written by AI, they still learned more from it than from the human-written version.

Should this worry you?

Here is where it gets interesting. In a follow-up paper published in PNAS Nexus, the same researchers found that AI summaries not only teach better, but also influence political opinions.

If the AI summary had a liberal slant, readers came away with more liberal opinions. A conservative slant had the opposite effect. The researchers believe this happens because AI doesn't just present facts; it frames them in a way that feels more logical and convincing.

AI tools are becoming the default way people learn about history and current events. That is not necessarily bad. But knowing that the tool shaping what you learn can also quietly shape what you think is something worth keeping in mind.

At the same time, AI hallucinations remain a significant issue, which makes persuasive AI-generated summaries all the more capable of misleading readers. A study by researchers at USC's Information Sciences Institute found that AI systems can execute propaganda campaigns with minimal human input.

Add to this the finding that AI can be more convincing than humans, and it is unsettling to consider how these tools could be used to manipulate human thinking and reasoning, pushing us toward a more fractured world.
