
Prof. Kowalski Leads Course on Explainable AI in Neural Networks - Now Available via Livestream

Explainable AI in the Neural Network Domain

Lecturer: Prof. Piotr Andrzej Kowalski

Schedule:

  • Thursday, 7th November: 9:00 AM – 1:00 PM | Sala Riunioni MO27, 1st floor
  • Friday, 8th November: 2:00 PM – 6:00 PM | Sala Riunioni MO27, 1st floor
  • Monday, 11th November: 2:00 PM – 6:00 PM | Sala Riunioni MO26, 1st floor
  • Tuesday, 12th November: 9:00 AM – 1:00 PM | Sala Riunioni MO27, 1st floor
  • Thursday, 14th November: Exam (written project) | Sala Riunioni MO27, 1st floor

Online attendance is also possible.

We kindly ask you to fill in the form if you are interested in participating.

Program:

This course introduces the principles and techniques of Explainable Artificial Intelligence (XAI) within the context of neural networks. Participants will explore methods to interpret and explain decisions made by neural networks, addressing the "black-box" nature of these models. The course will blend theory and hands-on practice through real-world case studies in domains such as healthcare and environmental monitoring.

Course Modules:

  1. Introduction to Neural Networks and XAI

    • Overview of neural networks' structure and behavior.
    • Understanding the black-box issue and the need for explainability.
  2. Interpretability Techniques

    • Introduction to feature importance, LIME, and SHAP techniques.
    • Methods to interpret complex models and enhance transparency.
  3. Sensitivity Analysis I

    • Explore gradient-based and perturbation-based sensitivity analysis to measure how inputs influence neural network predictions.
  4. Sensitivity Analysis II

    • Learn about global and local sensitivity analysis methods, which quantify the importance of features at different levels.
  5. XAI Techniques in Practice

    • Practical application of XAI in areas like healthcare and environmental studies, offering case studies and real-world examples.
  6. Ethical Considerations in XAI

    • Addressing bias, fairness, and transparency in AI models.
    • Ethical concerns in deploying AI systems, ensuring responsible use.
  7. Future Trends in XAI and Neural Networks

    • Exploration of emerging trends and research directions in XAI and their potential impact on AI technologies.
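As a taste of the sensitivity-analysis modules, the perturbation-based idea can be sketched in a few lines of Python: nudge each input of a network and measure how the prediction moves. The toy two-layer network and its weights below are illustrative assumptions for the example, not material from the course.

```python
import numpy as np

# Tiny fixed network used purely for illustration; the weights are
# arbitrary example values (3 inputs -> 2 hidden units -> 1 output).
W1 = np.array([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]])
W2 = np.array([0.7, -0.5])

def predict(x):
    """Forward pass: tanh hidden layer, linear output."""
    return np.tanh(x @ W1) @ W2

def perturbation_sensitivity(x, eps=1e-4):
    """Central finite differences: how much does the output change
    when each input is nudged by +/- eps?"""
    sens = np.zeros_like(x)
    for i in range(len(x)):
        up, down = x.copy(), x.copy()
        up[i] += eps
        down[i] -= eps
        sens[i] = (predict(up) - predict(down)) / (2 * eps)
    return sens

x = np.array([1.0, 0.5, -1.0])
print(perturbation_sensitivity(x))  # one sensitivity score per input feature
```

For a differentiable network like this one, the finite-difference scores converge to the analytic gradient, which is exactly the link between the perturbation-based and gradient-based views covered in the course.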

By the end of the course, attendees will gain a deeper understanding of:

  • Neural Networks and the importance of explainability.
  • Key interpretability techniques such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations).
  • Sensitivity Analysis including both global and local methods.
  • Practical applications of XAI in fields such as medicine and environmental monitoring.
  • Ethical concerns, addressing bias and fairness in AI models.
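For readers curious about what SHAP computes, here is a minimal, purely illustrative sketch of exact Shapley values for a tiny model. The model `f`, the point `x`, and the zero baseline are assumptions made for the example; the `shap` library itself uses far more efficient approximations than this brute-force enumeration.

```python
import itertools
import math
import numpy as np

def exact_shapley(f, x, baseline):
    """Exact Shapley values for model f at point x.

    Features outside a coalition S are set to their baseline values.
    Brute force over all 2^n coalitions, so only feasible for a
    handful of features.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                # Shapley weight for a coalition of this size.
                w = (math.factorial(size)
                     * math.factorial(n - size - 1)
                     / math.factorial(n))
                without_i = baseline.astype(float).copy()
                for j in S:
                    without_i[j] = x[j]
                with_i = without_i.copy()
                with_i[i] = x[i]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy additive model: with a zero baseline, the Shapley value of each
# feature in a linear model is exactly its own term.
f = lambda z: 2 * z[0] - z[1] + 0.5 * z[2]
x = np.array([1.0, 2.0, -1.0])
print(exact_shapley(f, x, np.zeros(3)))  # [2.0, -2.0, -0.5]
```

A useful sanity check is the efficiency property: the attributions always sum to f(x) minus f(baseline), which is what makes SHAP's "additive explanations" additive.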

Exam:
The final exam will be in the form of a written project, applying the techniques learned throughout the course.

CFD: 4
