
The introduction of artificial intelligence (AI) into the prevention of youth radicalization cannot be interpreted as a simple technological upgrade of security tools. Rather, it represents a structural shift in the informational and relational ecosystem within which young people construct identities, affiliations, and judgments of reality. In this context, AI does not automatically "create" extremism but can amplify the conditions for its spread: it accelerates the production of content, reduces the cost of accessing persuasive and operational capabilities, and increases the personalization of exposure. The policy question, therefore, is not whether or not to use AI, but how to govern it in a way that increases preventive capacity without eroding rights, public trust, and social cohesion—variables that, in prevention, are not incidental but crucial.
Public strategies often tend to portray radicalization as an essentially cognitive process: an individual is "convinced" by an ideology and progresses toward violence. In youth contexts, relational factors frequently precede doctrinal ones. Attraction to polarizing communities is often mediated by needs for recognition, status, identity protection, frustration management, and the search for meaning. Digital platforms address these needs through engagement architectures that select, order, and amplify content and interactions based on the likelihood of retaining attention.
This implies a first policy criterion: prevention should not be designed as a mere repression of content, but as the governance of social and infrastructural conditions. AI, in fact, tends to shift the focus of intervention to what is easily measurable (keywords, images, symbols), while many youth dynamics are implicit, ironic, codified, and culturally situated. Without a socio-educational framework, prevention focused solely on detection risks being shortsighted (because it only sees what it already knows how to look for) and counterproductive (because it can target non-violent group language, fuelling mistrust).
The impact of AI on youth radicalization emerges primarily through three transformations: scale, speed, and accessibility.
Scale concerns the ability to generate and disseminate content in large quantities and varied formats, adapting it to the codes of micro-communities. Contemporary radicalization often feeds on meme-based aesthetics and languages: rapid, remixable content, not necessarily sophisticated, but culturally recognizable. This is where policy encounters a classic limitation: regulatory and institutional mechanisms move more slowly than digital cultures. If public action intervenes only once a repertoire is already established, the ecosystem has often already moved elsewhere.
Speed relates to the dynamics of amplification and imitation. In the presence of critical events—attacks, crises, conflicts—content that glorifies perpetrators of violence or calls for reprisals can go viral in a matter of hours, producing cycles of emulation and accelerated radicalization. Policy, here, must balance speed of intervention with proportionality: automatic and rigid responses can generate large-scale errors, but slow responses can leave room for escalation.
Accessibility is the most underestimated point. AI makes production and manipulation tools available even to users with limited skills, lowering the threshold for propaganda, intimidation, and disinformation practices. At the same time, conversational systems can offer cognitive support for violent ideas or deviant behaviour, not because they "intend" to do so, but because they are designed to respond, assist, and maintain conversations. For youth prevention, this means the line between passive consumption and active participation can become blurred: producing, sharing, reworking, and coordinating requires less technical friction than in the past.
In the European context, a significant part of the debate on youth radicalization concerns the dissemination of radical Islamist narratives in digital environments. It is important to clearly distinguish between Islam as a plural religious tradition and its ideological exploitation by movements that transform religious references into exclusive and conflictual political projects.
In recent years, these environments have demonstrated a remarkable ability to adapt to technological transformations. Jihadist and neo-Islamist propaganda has progressively abandoned centralized communication models, adopting more widespread and networked forms, often oriented towards the construction of online micro-communities. In this scenario, artificial intelligence tools can further amplify these dynamics, facilitating the automated production of ideological content, rapid translation of messages, and the personalization of propaganda for specific audiences.
Understanding these transformations is essential to avoid two mirroring errors: on the one hand, the tendency to reduce the phenomenon to an exclusively religious issue; on the other hand, the temptation to ignore the role that some Islamist ideologies continue to play in the radicalization of a minority of young Europeans.
Young people's ability to distinguish between authenticity and fiction cannot be taken for granted, even in a generation accustomed to digital media. The proliferation of generated content (texts, images, videos) not only creates the risk of "believing falsehoods": it creates a state of epistemic instability in which the very idea of evidence can be challenged and informational trust fragments into group affiliations. This vulnerability is particularly critical for radicalization because it facilitates the construction of closed narrative worlds, where polarization is not an opinion, but a "reality" continually confirmed by consistent signals.
From a policy perspective, this suggests that media literacy must be rethought as a preventive infrastructure, not an episodic intervention. What is needed is a literacy that covers amplification logics, the financial incentives of the attention economy, manipulative rhetoric, and verification techniques, but also emotional and social skills (recognizing when content is designed to provoke outrage and hostility). Prevention depends not only on factual truth, but also on the ability to avoid being captured by antagonistic identity frames.
An emerging policy area concerns the use of conversational AI by young people as interlocutors for personal issues, emotional fragility, and seeking guidance. This dynamic can offer benefits in terms of immediate access and the perception of being listened to, but it poses specific risks for the prevention of radicalization.
The main problem is that radicalization is often a relational process: isolation, humiliation, resentment, and the search for recognition are conditions that can be exploited by extremist communities. If AI becomes a substitute for human relationships or an accelerator of communicative immersion, policy must consider how to implement protective mechanisms: not only content filters, but also mechanisms for "connecting" with qualified human support when signs of vulnerability, violent ideation, or distress emerge. In operational terms, this means integrating referral pathways, defining risk thresholds, ensuring supervision and auditing of responses in sensitive areas, and building interoperability with local services (schools, mental health, social services), avoiding security shortcuts.
AI is often invoked as a solution to the scale of online content. However, in preventing youth radicalization, automated moderation encounters a structural limitation: the difficulty in interpreting context, irony, cultural codes, and communicative intent. This results in false positives (over-removal of legitimate content) and false negatives (failure to remove harmful content), with effects that are not symmetrical: over-removal can erode trust and fuel perceptions of persecution; under-removal can normalize violence and amplify copycat behaviour.
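The asymmetry described above can be made concrete with a toy example. The sketch below uses entirely synthetic data and a hypothetical classifier score; it is a minimal illustration, not a description of any real moderation system. It shows why no single removal threshold eliminates both error types: lowering the threshold increases over-removal of legitimate content, while raising it lets more harmful content through.

```python
# Illustrative only: synthetic (score, is_actually_harmful) pairs from a
# hypothetical moderation classifier. Real systems face far noisier signals.
labeled_items = [
    (0.95, True), (0.80, True), (0.70, False), (0.65, True),
    (0.55, False), (0.40, True), (0.30, False), (0.10, False),
]

def moderation_errors(threshold):
    """Count both error types when items scoring >= threshold are removed."""
    false_positives = sum(1 for score, harmful in labeled_items
                          if score >= threshold and not harmful)  # legitimate content removed
    false_negatives = sum(1 for score, harmful in labeled_items
                          if score < threshold and harmful)       # harmful content kept
    return false_positives, false_negatives

for t in (0.3, 0.6, 0.9):
    fp, fn = moderation_errors(t)
    print(f"threshold={t:.1f}  over-removal={fp}  under-removal={fn}")
```

On this toy data, a low threshold (0.3) produces three wrongful removals and no misses, while a high threshold (0.9) produces the reverse: the trade-off is structural, which is why the essay argues the choice of operating point is a question of legitimacy and proportionality, not accuracy alone.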
This is the crucial point: prevention is not just a matter of technical accuracy, but of social legitimacy. Interventions perceived as opaque, discriminatory, or disproportionate can become risk factors in themselves: they push young people toward less regulated spaces, reinforce anti-institutional narratives, and intensify the attraction of "counter-system" communities. It follows that the goal cannot be "maximizing removal," but rather optimizing a balance between safety, rights, and trust.
Effective AI governance in prevention requires a paradigm shift: from managing individual content to managing the ecosystem. This involves at least four approaches.
First, shifting the focus from isolated content to risk patterns that encompass actors and behaviours: invitations to closed channels, rituals of belonging, escalations of dehumanization, and emulation networks. This allows for earlier and more proportionate interventions but requires rigorous safeguards to avoid arbitrary profiling.
Second, prioritize, where possible, aggregate analysis and situational awareness over individual surveillance. In local contexts (schools, cities, community services), the preventive goal is to identify tensions and trends to activate social, educational, and cohesion responses. Individual profiling, besides often being legally and politically problematic, is also fragile in predictive terms: it risks bias and causes reputational damage that is difficult to repair, especially for minors.
Third, incorporate safeguards as enabling conditions: transparency towards stakeholders, explainability of decisions, clear responsibilities along the supply chain (development–procurement–use), channels for appeal, and competent human oversight. These elements do not slow down prevention: they constitute its infrastructure of trust.
Fourth, invest in human and interdisciplinary capabilities. AI is not a substitute for cultural understanding: linguistic skills, knowledge of youth subcultures, developmental psychology, educational work, and the ability to interpret context are needed. Without these resources, AI automates misunderstanding and rigidifies categories, with a paradoxical effect: the more technology, the less ability to see changes.
Within this framework, European Muslim communities can play a decisive role in preventing radicalization. Not simply as targets of security policies, but as civic and cultural actors capable of contributing to the construction of more resilient religious and social spaces.
Effective prevention also requires developing religious and intellectual leadership capable of countering the ideological and politicized readings of Islam that fuel radical narratives. This involves promoting educational and religious contexts in which the Islamic tradition is presented in its theological and spiritual richness, but also in its compatibility with the principles of the rule of law, citizenship, and European pluralism.
In this sense, the contribution of Muslim communities consists not only in rejecting extremist tendencies, but in actively participating in the construction of a European Islam aware of its roots in democratic societies and capable of offering young people non-confrontational and non-ideological models of religious belonging.
AI can strengthen the prevention of youth radicalization only if integrated into a strategy that recognizes the relational nature of the phenomenon and the centrality of legitimacy. Pursuing the promise of total control through automation and widespread surveillance is not only normatively problematic, but strategically fragile: it can lead to alienation, displacement toward opaque spaces, and the consolidation of antagonistic narratives. Conversely, a controlled use of AI, oriented toward aggregated signals, decision-making support, and the activation of social responses, can help identify risk patterns early and strengthen resilience, cohesion, and trust. Ultimately, effective prevention does not mean maximum capacity for repression or prediction, but rather the institutional capacity to intervene proportionately, reliably, and credibly, preserving the very democratic conditions that extremism seeks to undermine.
Abdellah M. Cozzolino