Artificial intelligence and the future of human decision-making
A conversation with Francesco Marcelloni on AI, democracy, and why the biggest risks of artificial intelligence may not be the ones we talk about most.
Artificial intelligence is transforming healthcare, education, and governance. Dan Banik and Francesco Marcelloni explore the risks and benefits, and why human judgment must remain central in the AI era.
🎧 Listen to the full conversation
Estimated listening time: ~40 minutes
Artificial intelligence is now everywhere in public debate. It is discussed as a threat to democracy, a driver of disinformation, a tool for better healthcare, a support for education, and a force that could reshape labor markets. However, much of the discussion swings between extremes. Either AI will save us, or it will destroy us.
In reality, the picture is more complicated.
Some observers warn that AI is no longer merely a tool but increasingly something closer to an agent — a system capable of generating information, persuading people, and shaping public debate. As AI increasingly mediates the information we consume, it may amplify misinformation, weaken the shared factual baseline on which democratic societies depend, and reshape how citizens form opinions and make decisions.
But the story of artificial intelligence is not only about risks.
Across the world, AI is already being used in practical ways, such as helping doctors detect diseases earlier, assisting students with learning, and enabling governments to analyze vast quantities of information.
Key insights from this conversation
While AI is often framed as either a revolutionary breakthrough or an existential threat, the reality is far more complex.
Artificial intelligence can both produce misinformation and detect it, making it part of both the problem and the solution.
Europe risks falling behind in AI development because of its dependence on US and Chinese technology.
In healthcare, AI may significantly improve diagnosis. However, trust requires explainable systems that doctors can understand.
Contrary to earlier waves of automation, medium- and high-skill analytical jobs may be especially exposed to AI disruption.
Recent research by Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar adds another dimension to this debate.
Their work suggests that while AI can improve short-term decision-making through highly accurate recommendations, heavy reliance on such systems may reduce incentives for humans to invest in learning and knowledge creation.
Because human effort produces not only private insights but also shared public knowledge, excessive dependence on AI could gradually weaken the collective knowledge base that supports innovation, scientific progress, and informed democratic debate.
In the conversation below, Francesco Marcelloni (Professor of Data Mining and Machine Learning at the University of Pisa and Academic Director of the Knowledge Hub on AI at Circle U.) and I explore a more balanced view of artificial intelligence.
AI must be understood through a double lens.
It brings risks. But it also offers enormous opportunities.
“We have to discuss AI from a double perspective. On the one hand, AI can help. On the other hand, we should be careful.”
— Francesco Marcelloni
The problem with the AI debate
The discussion begins with a reference to Yuval Noah Harari’s argument at Davos that AI is no longer simply a tool but something closer to an agent. Harari’s analogy is striking. A knife can be used to cut salad or to kill, but humans remain in control of the knife.
With AI, the fear is different. The system may begin to act with a degree of autonomy, and human control may weaken. Marcelloni does not dismiss these concerns, but he resists alarmism. AI, he argues, is still a technology. And like all technologies, it has both benefits and drawbacks. The difference is that AI’s drawbacks can affect society and humanity at scale.
That is why it must be approached with caution. At the same time, it should not be treated as a demon.
But the debate about AI is not only about risk. It is also about power.
Who controls AI and whose needs it serves
A growing concern in policy circles is that the direction of AI development is being shaped by a small number of firms and elites, particularly in Silicon Valley. If a handful of companies determine what kinds of problems AI should solve, entire regions and populations may be left out.
Marcelloni broadens the discussion by turning to Europe. While AI is often framed as a contest between the United States and China, Europe itself is in a fragile position. Most of the large language models used by the public come from outside Europe. This creates not only technological dependence but also strategic vulnerability.
He also warns that public debate tends to reduce AI to large language models alone. That is a mistake. Artificial intelligence is far broader than generative systems. And in economic terms, more traditional forms of AI may ultimately matter more than the current wave of generative models.
“AI has this double role today. It can help produce fake news, but it can also help detect it.”
— Francesco Marcelloni
Deepfakes and misinformation may be generated with AI. But AI-based tools can also identify them. In that sense, artificial intelligence is not simply a danger to be feared. It is also part of the toolkit for responding to the risks it creates.
Final thought
Artificial intelligence will shape everything from medical diagnosis to political communication. But the technology itself will not determine the outcome. What matters is how societies govern it, regulate it, and integrate it into institutions.
In other words, the future of AI is not simply a technological question. It is also a political, institutional, and human one.
Artificial intelligence is neither purely liberating nor purely dangerous. It amplifies human choices, human priorities, and human failures. It can centralize power or democratize access. It can deepen inequality or expand opportunity. It can weaken critical thinking or strengthen it.
The real question is not what AI can do. The real question is how we choose to use it.
If you enjoy conversations like this, please consider subscribing.
New essays and podcast episodes are delivered directly to your inbox.