Technical Tracks

IRAI 2026 focuses on advancing Artificial Intelligence systems that are not only technically innovative but also responsible, trustworthy, secure, and deployable in real-world environments. Submissions addressing transparency, safety, accountability, robustness, and governance aspects of AI systems across all technical domains are especially encouraged.

The organizing committee cordially invites high-quality papers representing original work, including but not limited to the following technical tracks:

  • TT01 - AI Models and Algorithms: Machine learning and deep learning methods, generative AI and large language models, foundation models, multimodal learning, reinforcement learning, representation learning, trustworthy model training, uncertainty estimation, explainable model architectures, model evaluation, robustness, and benchmarking.

  • TT02 - Industrial Applications of AI: AI for manufacturing and Industry 4.0, predictive maintenance, industrial automation, intelligent process optimization, smart infrastructure, quality inspection, AI-driven decision support systems, deployment of AI in regulated environments, and real-world industrial case studies.

  • TT03 - Data Centers and AI Infrastructure: Scalable AI training and inference infrastructure, distributed and parallel AI systems, cloud and edge AI platforms, GPU and accelerator optimization, energy-efficient AI systems, trustworthy and secure AI infrastructure, AI lifecycle management systems, and reliable deployment of foundation models.

  • TT04 - AI Safety and Security: Robust and safe AI system design, adversarial machine learning, secure model deployment, privacy-preserving AI, threat detection and mitigation, protection against prompt injection and model manipulation, secure AI pipelines, and AI risk management and assurance.

  • TT05 - Physical AI and Robotics: Autonomous robotic systems, embodied AI, perception and control, robot learning, industrial robotics, autonomous navigation, human-robot interaction, AI for cyber-physical systems, and safe and reliable deployment of AI in physical environments.

  • TT06 - AI Ethics and Society: Responsible AI design and deployment, fairness and bias mitigation, explainability and transparency, accountability frameworks, governance and regulatory compliance, auditability and assurance, societal impacts of AI, human-centered AI, and trustworthy AI systems.

  • TT07 - AI in Education: Intelligent tutoring systems, generative AI in education, AI-assisted learning platforms, automated assessment systems, personalized learning environments, AI literacy, responsible deployment of AI in education, and AI-supported teaching and learning innovation.

  • TT08 - Multidisciplinary Practice of AI: Cross-domain AI applications in healthcare, finance, energy, sustainability, smart cities, and public systems; interdisciplinary AI systems; human-AI collaboration; digital twins; and integration of AI into complex real-world operational environments.


Accepted Special Sessions

IRAI 2026 is now accepting papers for the following approved special sessions:


SS01 - Responsible Decentralised Agentic Systems (ReDAS): Operationalising Responsibility and Accountability of AI Agents

Special Session co-chairs:
Svetlana Bialkova, Sofia University St. Kliment Ohridski, Bulgaria
Simeon J. Simoff, Western Sydney University, Australia
Maria Vanina Martinez, Spanish National Research Council (IIIA-CSIC), Spain

Operationalising responsible AI spans research and development across the field, from “ethics-by-design” to “assurance-by-verification.” This special session focuses on decentralised agentic systems, where autonomous agents propose, argue, and make decisions in a decentralised environment, subject to AI regulatory compliance (EU AI Act, NIST AI RMF) and data protection legislation (Australian Privacy Act, EU GDPR). While formal ethical principles are well established, rigorous methods for operationally demonstrating them are still in demand. The session is designed to link theoretical developments with the systems engineering, legal, and sociotechnical challenges of building verifiable Responsible AI (RAI) systems.

We invite researchers and practitioners from the fields of agentic AI systems, complex systems, value-aligned systems, decentralised computing, AI governance and data sovereignty to explore the following interconnected pillars:

  • technical robustness of RDAS, focusing on deviation and hallucination detection, and consensus fail-safes for agentic workflows;
  • consensus in decentralised agentic systems, with embedded argumentation mechanisms for decision-making;
  • algorithmic fairness in value-aligned systems, including empirical bias mitigation and “unlearning” techniques for value adjustments;
  • human-interpretable XAI for actionable transparency in RDAS; and
  • automated compliance auditing for RDAS.

The session features peer-reviewed presentations and a “horizon” panel of AI research, development, and legal experts on practical directions and strategies for advancing accountable autonomous systems in ways that keep human agency non-negotiable.

SS02 - Ethical and Responsible AI Practices in Primary and Secondary Education

Special Session Chair:
Rakshit Jain, IEEE Pune Section and PTC, India

Artificial Intelligence has rapidly penetrated educational ecosystems, including pre-university schooling, through generative AI tutors, automated assessment tools, and personalized learning platforms. However, the deployment of AI in this formative stage remains largely unregulated and unstandardized. While higher education has begun discussing responsible AI use, pre-university education—where cognitive, ethical, and social foundations are shaped—has received minimal structured attention. This represents an alarming gap with long-term societal consequences.

Responsible AI in education must focus on enhancing rather than replacing human learning, teaching, and critical thinking. Core principles include transparency in AI usage, preservation of academic integrity, equitable access, data privacy, and mitigation of algorithmic bias. Students should be guided to use AI for support—brainstorming, feedback, and exploration—without outsourcing reasoning or creativity. Faculty must employ AI to scaffold learning, personalize pedagogy, and reduce administrative burden while maintaining human oversight.

This special session seeks to bring together researchers, educators, policymakers, and technologists to develop actionable standards and governance models for Responsible AI in pre-university education. The topic is directly aligned with the mission of IRAI to promote trustworthy, ethical, and socially beneficial AI.

At present, such structured discourse receives inadequate attention in many regions, despite AI becoming deeply embedded in school learning environments. This session therefore responds to an urgent global need, and we hope that IRAI 2026 will become a pioneering platform for initiating this critical movement.

Topics of interest include, but are not limited to:

  • Bias and fairness in AI tutoring systems
  • Data privacy and protection of minors
  • Responsible use of generative AI in school learning environments
  • AI literacy for teachers and students
  • Transparency and explainability in educational AI systems
  • Assessment redesign in the age of AI
  • Human-centered AI-assisted learning
  • Governance and regulatory frameworks for AI in education
  • Ethical and societal implications of AI in schools