CEPS AI

Artificial Intelligence, Governance, and Society


CEPS AI is a research initiative devoted to the study of the economic, institutional, and societal implications of artificial intelligence and algorithmic systems. Building on the strengths of CEPS in economic theory and public policy, the initiative examines how AI transforms markets, public decision-making, democratic processes, and the circulation of information.

CEPS AI focuses on how artificial intelligence reshapes economic interactions, information environments, and collective decision-making in modern economies and societies. Particular attention is paid to governance, regulation, and the design of institutions and public policies that ensure responsible, transparent, and socially beneficial uses of algorithmic systems.

By bringing together researchers at the intersection of economics, public policy, data science, and political science, CEPS AI provides a platform for interdisciplinary research and dialogue on the challenges raised by AI in contemporary societies.

At CEPS, Artificial Intelligence Serves as a Lens for Economic and Social Analysis

AI & Democratic Information

Thomas Renault — Professor at RITM and CEPS — www.thomas-renault.com

How does the use of large language models in the public sphere transform the production, dissemination, and evaluation of information, and with what consequences for democratic debate?

Thomas Renault studies the use of large language models (LLMs) on social media, more specifically on X, to analyze the growing role of conversational AI systems as information intermediaries. Drawing on original empirical data, this work examines how users employ LLMs for fact-checking, the types of questions they pose, who initiates these fact-checks, and who is fact-checked. It also analyzes the degree of agreement between different LLMs, between LLMs and evaluations from the crowd-based Community Notes program, and between LLMs and professional fact-checkers. The findings highlight key challenges for democracy related to trust, transparency, and the concentration of informational power in the hands of a small number of technological actors, and contribute to broader reflections on AI governance and the institutional conditions required to preserve a pluralistic and reliable information environment.
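The agreement analyses described above can be illustrated with a minimal sketch: given categorical verdicts from two raters on the same set of posts (say, an LLM and Community Notes contributors), one can compute both raw agreement and chance-corrected agreement (Cohen's kappa). The verdict data below is invented for illustration and does not come from the paper.

```python
from collections import Counter

def agreement_rate(a, b):
    """Fraction of items on which two raters give the same verdict."""
    assert len(a) == len(b) and len(a) > 0
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(a)
    p_o = agreement_rate(a, b)                      # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(a) | set(b)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Toy verdicts ("true"/"false") from an LLM and from Community Notes raters
llm   = ["true", "false", "false", "true", "false", "true"]
notes = ["true", "false", "true",  "true", "false", "false"]

print(agreement_rate(llm, notes))           # 4 of 6 verdicts match: 0.666...
print(round(cohens_kappa(llm, notes), 3))   # 0.333
```

Raw agreement overstates concordance when one verdict dominates; kappa corrects for the agreement two raters would reach by chance, which is why both are reported in inter-rater studies.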

References
  1. Renault, T., Mosleh, M., & Rand, D. G. (2025). @Grok Is This True? LLM-Powered Fact-Checking on Social Media. Working Paper.
  2. Renault, T., Mosleh, M., & Rand, D. G. (2025). Republicans are flagged more often than Democrats for sharing misinformation on X’s Community Notes. PNAS, 122(25), e2502053122.

AI & Financial Sustainability


Samuel Ligonnière — Assistant Professor at CEPS

How can artificial intelligence help assess the sustainability and resilience of firms and financial systems in the face of environmental and economic risks?

Work led by Samuel Ligonnière examines how AI can aggregate climate-related information disclosed by firms, assess its reliability, and identify inconsistencies across sources, as well as how environmental news is priced in financial markets. This research sheds light on the mechanisms underlying green finance and investor responses to climate-related risks. Related work applies AI tools to the study of corporate distress and insolvency in France, analyzing the determinants of firm failures, the allocation of firms across restructuring and bankruptcy procedures, and their economic consequences across sectors and territories. Together, these projects contribute to a broader understanding of financial sustainability, market resilience, and the role of AI in informing economic policy.

Reference

Bennani & Ligonnière (2025). The Dynamic Effect of Climate News on Financial Markets: Evidence from France. HAL Working Paper.


AI & Meritocratic Incentives

Ugo Bolletta — Assistant Professor at RITM & CEPS

The integration of AI into economic activity raises fundamental questions about how rewards should be distributed in society.

Ugo Bolletta studies, from a theoretical perspective, the interaction between AI and the distribution of economic rewards. Meritocratic societies are founded on the principle that rewards should reflect individual effort and moral desert. The diffusion of AI into productive activities may disrupt this link by altering how effort translates into observable outcomes. In a canonical principal–agent framework, AI is modeled as a source of uncertainty in firms’ beliefs about the relationship between effort and performance. The mere possibility that agents use AI to facilitate their work leads firms to adjust optimal wage contracts downward. This research highlights a tension between the potential of AI to improve overall efficiency and its capacity to exacerbate economic inequality, calling for a re-examination of the incentive schemes governing work, compensation, and resource allocation.
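One way to see the downward-adjustment mechanism, sketched here in the spirit of a standard linear-contract (Holmström–Milgrom) setup rather than the paper's own model, is to let possible AI use add noise to the effort–output link; all symbols below are illustrative assumptions:

```latex
% Output depends on effort e plus noise; possible AI use adds variance \sigma_{AI}^2:
y = e + \varepsilon,
\qquad \varepsilon \sim \mathcal{N}\!\left(0,\; \sigma^{2} + \sigma_{\mathrm{AI}}^{2}\right)
% With a linear wage w = \alpha + \beta y, CARA risk aversion r, and effort cost
% \eta e^{2}/2, the optimal piece rate is
\beta^{\ast} = \frac{1}{1 + r \eta \left(\sigma^{2} + \sigma_{\mathrm{AI}}^{2}\right)}
% which is decreasing in \sigma_{AI}^2: greater AI-induced uncertainty about how
% effort maps into performance flattens incentive pay.
```

In this sketch the firm cannot tell whether a strong outcome reflects effort or AI assistance, so it insures itself by paying less for observed performance, which is the tension between efficiency and reward distribution described above.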

Reference

Work in progress.


AI & Systemic Information Risk

Olivier Bos — Professor at CEPS

Stefano Bosi — Professor at CEPS

How does the interaction between artificial intelligence and social communication networks affect the stability of collective knowledge?

When AI systems both shape information flows and retrain on social data, small informational distortions can become self-reinforcing, making upstream, network-aware regulation essential to preserve the stability of collective knowledge.

In joint work, Olivier Bos and Stefano Bosi develop a formal model in which AI-generated content diffuses through social networks while AI systems learn from the informational environment they influence. This interaction creates feedback loops that can amplify small informational distortions over time, even in the absence of manipulation or irrational behavior. The analysis shows that the long-run dynamics of complex AI–society systems collapse to a simple stability condition, identifying sharp thresholds beyond which misinformation becomes self-reinforcing. It highlights how network structures such as homophily or centralized information architectures increase fragility, and derives policy frontiers that characterize the minimum level of upstream filtering required to preserve informational stability. This work provides a theoretical foundation for risk-based and network-aware AI regulation, directly relevant to debates surrounding the EU AI Act and digital platform governance.
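The stability condition described above can be illustrated with a one-dimensional reduction; the notation is an assumption for exposition, not the paper's own. Let \(m_t\) be the share of distorted content, \(g > 1\) the amplification from AI systems retraining on content they influenced, and \(\varphi \in [0,1]\) the fraction filtered upstream:

```latex
% One-dimensional illustration of the feedback loop:
m_{t+1} = g\,(1-\varphi)\, m_{t}
% The distortion dies out iff the effective gain is below one:
g\,(1-\varphi) < 1
\quad\Longleftrightarrow\quad
\varphi > 1 - \frac{1}{g}
% i.e. the minimum upstream filtering required for stability grows with the
% amplification factor g, and no manipulation or irrationality is needed for
% the threshold to bind.
```

In the full network model the scalar gain \(g\) is replaced by a spectral condition on the diffusion structure, which is why homophily and centralized architectures, by concentrating amplification, push the system toward the unstable side of the threshold.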

Reference

Bos, O., & Bosi, S. AI Contagion in Social Networks. Working Paper.


Contact & Information

Olivier Bos — Professor, ENS Paris-Saclay