ISoLA 2026 • REoCAS

Rigorous Engineering of Collective Adaptive Systems (in the Age of Pervasive AI)

Organizers:

  • Martin Wirsing (LMU München, Germany)
  • Rocco De Nicola (IMT Lucca, Italy)
  • Stefan Jähnichen (TU Berlin, Germany)
  • Catia Trubiani (Gran Sasso Science Institute, Italy)

Modern software systems are increasingly distributed, decentralised, and autonomous, operating in dynamic and open-ended environments. These systems are commonly composed of heterogeneous components that include conventional software, formally specified controllers, adaptive mechanisms, and – more recently – artificial intelligence (AI) components. Together, these elements support decision-making, coordination, adaptation, and interaction with humans, often giving rise to complex collective behaviours.

In collective adaptive systems, the interplay between software components, AI-based techniques, and human actors can lead to emergent behaviours that are difficult to predict and analyse. While recent advances in generative AI offer powerful new capabilities, they also introduce additional sources of uncertainty, such as underspecified behaviour, limited explainability, and deviations from assumed operating conditions. When such components are deployed at scale or in safety-critical and socio-technical contexts, uncertainties may propagate through the system, amplifying the risk of undesirable global behaviour.

Addressing these challenges requires a principled engineering approach grounded in modelling, abstraction, and formal reasoning. Rigorous engineering methodologies, formal methods, and systematic verification and validation techniques play a central role in ensuring that collective adaptive systems remain safe, reliable, and trustworthy. These approaches are essential not only for analysing traditional software components, but also for constraining, monitoring, and integrating AI-based elements within well-defined architectural and behavioural boundaries. In particular, there is a growing need for methods and tools that enable system-level reasoning about safety, security, correctness, reliability, and resilience, while complementing data-driven AI approaches with explicit models, guarantees, and assurances.

This track is a continuation of the successful ISoLA tracks on “Rigorous Engineering,” held in 2014, 2016, 2018, 2020/2021, 2022, and 2024. While earlier editions focused on autonomic ensembles and collective adaptive systems, this 2026 edition explicitly acknowledges the pervasive role of generative and agentic AI and the resulting need to reconcile learning-based techniques with rigorous, formally grounded engineering approaches.

The track provides a forum for presenting research on principled methods for the design, analysis, verification, and operation of collective adaptive systems that may integrate AI components while maintaining strong guarantees about system behaviour.

Topics of interest include (but are not limited to):

  • Formal methods, theoretical foundations, and rigorously grounded applications of machine learning for the design, specification, and analysis of CAS
  • Engineering techniques and methodologies for programming, orchestrating, and operating CAS, including the use of generative and agentic AI approaches
  • Methods and frameworks for adaptation, learning, and dynamic self-expression with formal guarantees and runtime assurances
  • Verification, validation, monitoring, and certification techniques for CAS that may incorporate or rely on AI components
  • Techniques and mechanisms to detect, mitigate, and control emergent misbehavior, unintended interactions, and AI hallucinations
  • Quantitative methods and metrics for the evaluation of CAS
  • Approaches to ensuring and assessing the trustworthiness, security, performance, and resilience of CAS
  • Methods to identify, model, specify, and reason about CAS, possibly including AI-augmented components

 

AISoLA 2026 • AIAP

AI Assisted Programming (AIAP)


Organizers:

  • Wolfgang Ahrendt (Chalmers University of Technology, SE)
  • Bernhard Aichernig (Johannes Kepler University Linz, AT)
  • Klaus Havelund (Jet Propulsion Laboratory, US)

Neural program synthesis using large language models (LLMs), which are trained on open-source code and other artifacts, is rapidly becoming a popular addition to the software developer’s toolbox. LLM-driven coding assistants and coding agents can generate code in many different programming languages from natural-language requirements. This opens up fascinating new perspectives, such as increased productivity and accessibility of programming. However, although these LLMs have improved considerably in a short time, neural systems do not come with guarantees of producing correct, safe, or secure code. They produce the most probable output based on their training data, and there are countless examples of coherent but erroneous results. Even alert users fall victim to automation bias: the well-studied tendency of humans to over-rely on computer-generated suggestions. Software development is no exception.

This track is devoted to discussions and exchange of ideas on questions like:

  • What are the capabilities of this technology when it comes to software development?
  • What are the limitations?
  • What are the challenges and research areas that need to be addressed?
  • How can we facilitate the rising power of code co-piloting while achieving a high level of correctness, safety, and security?
  • What does the future look like? How should these developments impact future approaches and technologies in software development and quality assurance?
  • What is the role of models, tests, specification, verification, and documentation in conjunction with code co-piloting?
  • Can quality assurance methods and technologies themselves profit from the new power of LLMs?

Topics of relevance to this track include the interplay of LLMs with the following areas:

  • Program synthesis
  • Formal specification and verification
  • Model driven development
  • Static analysis
  • Testing
  • Monitoring
  • Documentation
  • Requirements engineering
  • Code explanation
  • Library explanation

 

AISoLA 2026 • FAITH

Formal Approaches in Intelligence for Transforming Healthcare

Organizers:

  • Martin Leucker (University of Lübeck, DE)
  • Violet Ka I Pun (Western Norway University of Applied Sciences, NO)

To ensure high-quality healthcare support in the future within given financial constraints, digitalization of the healthcare sector is mandatory. This digitalization is implemented either with conventional software development or with techniques from artificial intelligence, and it faces two important challenges: First, healthcare is a safety-critical domain and requires the use of formal methods to ensure that systems work as required. Second, the use of artificial intelligence in safety-critical domains is still not fully understood.

Formal methods build on precise mathematical modelling and analysis to verify a system’s correctness. They comprise static and dynamic analysis techniques such as model checking, theorem proving, and runtime verification, to mention the most prominent ones. Their theoretical foundations have been developed over the past decades, but their application in various domains remains a challenge.

AI in healthcare is transforming the field by improving diagnostics, aiding in medical imaging analysis, personalizing treatment, and supporting clinical decision-making. It enables faster and more accurate analysis of medical data, enhances drug discovery, and assists in robot-assisted surgeries. AI also contributes to predictive analytics, virtual assistants, wearable devices, and clinical decision support. However, it is important to remember that AI is a tool to support healthcare professionals rather than replace them, and ethical considerations and data privacy are crucial in its implementation.

This track is devoted to discussions and exchange of ideas on questions like:

  • Workflow Modelling and Optimization: How can hospital workflows be formally modelled and optimized?
  • Validation and Clinical Implementation: How can algorithms be rigorously tested and integrated into clinical workflows?
  • Robustness and Reliability: How can systems be made robust, reliable, and adaptable to changing patient populations and data quality?
  • Human-AI Collaboration: How can systems effectively collaborate with healthcare professionals?
  • Long-term Impact and Cost-effectiveness: What is the long-term impact and cost-effectiveness of digitalization in healthcare?
  • Explainability and Interpretability: How can AI algorithms be made transparent and understandable to healthcare providers and patients?
  • Data Quality and Integration: How can diverse healthcare data sources be integrated while ensuring data quality and interoperability?
  • Ethical and Legal Considerations: What ethical and legal frameworks should be established to address privacy, consent, bias, and responsible AI use?
  • Regulatory and Policy Frameworks: What regulatory and policy frameworks are needed for the development and deployment of AI in healthcare?

These research questions drive efforts to address technical, ethical, legal, and societal challenges to maximize the benefits of digital solutions in healthcare.