Krunoslav ANTOLIŠ


This paper investigates the intricate interplay between artificial intelligence (AI) and the proliferation of disinformation, presenting a strategic framework to mitigate its impact in the digital era. The study spans diverse domains, contributing to a comprehensive grasp of AI-driven disinformation. It examines the dynamics of disinformation, scrutinizing the mechanisms of its creation and dissemination, with a particular focus on the role played by AI. The methods employed involve analyzing AI involvement, enhancing algorithms for real-time recognition and analysis of disinformation, and exploring social dynamics and human behavior. The research reveals the tactics employed by malicious actors, such as fabricating misleading narratives and manipulating information. Agile AI algorithms have been devised for assessing credibility, tracking geolocation, and implementing ethical privacy-protection measures. The implications encompass identifying social structures and cognitive vulnerabilities, leading to the development of targeted interventions. The AI-centric detection approach centers on refining these algorithms for the real-time identification of disinformation, with the aim of equipping systems to detect it swiftly. The assessment of social and psychological factors examines how social structures, group dynamics, and cognitive biases influence the propagation of disinformation; educational programs are being formulated to enhance awareness and critical thinking, with strategies tailored to specific vulnerabilities. Cross-sectoral collaboration underscores the importance of information exchange between sectors, the pooling of expert knowledge, and the establishment of communication channels; joint efforts with technology companies, educational institutions, and other stakeholders enable a comprehensive approach to combating disinformation. Balancing regulation with fundamental rights raises the challenge of preserving freedom of speech and privacy: defining the equilibrium of legal frameworks, in view of the global context and the dynamic nature of technology, is essential, and transparency and ethical considerations play a pivotal role in regulatory measures. Public awareness and education initiatives aim to reduce susceptibility to disinformation: awareness campaigns highlight the existence of disinformation, while educational programs foster media literacy and critical-thinking skills; evaluation involves measuring the level of awareness and assessing changes in behavior. In summary, this research offers insights toward a holistic approach to the challenges posed by AI-driven disinformation. The proposed framework encourages interdisciplinary collaboration, underscores ethical considerations in regulation, and advocates education and awareness.