TrAC Interpretability in AI
The Interpretability in AI digital badge is designed to provide a foundational understanding of interpretability, covering core concepts, interpretable models, model-agnostic methods, and example-based explanations. The course is intended for a broad audience across the software and technology industry, including software engineers, data scientists, data engineers, data analysts, research scientists, and software developers. The Interpretability in AI course is offered by Iowa State University's Translational AI Center (TrAC) and is part of a larger Foundations in AI pathway program.
Skills / Knowledge
- Artificial Intelligence
- Explainable AI
- Python
- Machine Learning
- PyTorch
Earning Criteria
Required
The Interpretability in AI Badge is earned after successful completion of a 4-week, asynchronous, self-paced online course consisting of four modules, which cover essential topics including an overview of interpretable and explainable machine learning algorithms applied to real-world problems.
This course offers a blend of hands-on activities, assignments, video lectures, and tutorials.
Learning Outcomes:
- Formulate a machine learning problem with interpretable models suited to the specific task
- Develop basic interpretable machine learning models using machine learning packages such as Scikit-Learn and PyTorch
- Develop model-agnostic methods for interpreting black-box machine learning models
- Develop example-based explanations for interpreting black-box machine learning models
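To make the last outcome concrete: in its simplest form, an example-based explanation answers "which training example does this input most resemble?" The sketch below is not course material; the data and function names are invented for illustration, and it uses a plain nearest-prototype lookup, the most basic member of the example-based family:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_prototype(instance, X_train, y_train):
    """Explain a prediction by returning the most similar training
    example and its label: 'the model treats this input like that one'."""
    best = min(range(len(X_train)), key=lambda i: euclidean(instance, X_train[i]))
    return X_train[best], y_train[best]

# Tiny invented training set for the demonstration.
X_train = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
y_train = ["low", "low", "high"]

example, label = nearest_prototype([4.5, 5.2], X_train, y_train)
# → ([5.0, 5.0], "high")
```

More elaborate example-based methods (prototypes and criticisms, counterfactuals, influential instances) refine this same idea of explaining a prediction through concrete data points rather than model internals.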
Assessment:
Participants will be assessed on:
- Engagement with each module
- Two coding exercises that involve writing Python code to implement interpretable models and model-agnostic methods with LIME and SHAP
- Two quizzes assessing basic and advanced concepts of interpretability in AI
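LIME, one of the tools used in the coding exercises, explains a single prediction by fitting a simple linear surrogate to the black-box model's behavior in a small neighborhood of the instance. The pure-Python sketch below shows only the core idea under simplifying assumptions (a toy black-box function, one feature perturbed at a time); real LIME, via the `lime` package, perturbs all features jointly and fits a sparse multivariate surrogate:

```python
import math
import random

def black_box(x0, x1):
    # Hypothetical black-box model; any opaque prediction function works here.
    return x0 * x0 + 3.0 * x1

def local_linear_slopes(instance, predict, n_samples=2000, width=0.1, seed=0):
    """LIME-style sketch: sample points near `instance`, weight them by
    proximity, and fit a weighted linear surrogate one feature at a time.
    The fitted slopes are the local 'feature effects'."""
    rng = random.Random(seed)
    slopes = []
    for j in range(len(instance)):
        deltas, outputs, weights = [], [], []
        for _ in range(n_samples):
            point = list(instance)
            delta = rng.gauss(0.0, width)
            point[j] += delta
            deltas.append(delta)
            outputs.append(predict(*point))
            # Exponential proximity kernel: nearby samples count more.
            weights.append(math.exp(-(delta * delta) / (width * width)))
        # Weighted least-squares slope of output vs. perturbation.
        w = sum(weights)
        d_mean = sum(wi * d for wi, d in zip(weights, deltas)) / w
        o_mean = sum(wi * o for wi, o in zip(weights, outputs)) / w
        num = sum(wi * (d - d_mean) * (o - o_mean)
                  for wi, d, o in zip(weights, deltas, outputs))
        den = sum(wi * (d - d_mean) ** 2 for wi, d in zip(weights, deltas))
        slopes.append(num / den)
    return slopes

# Around (1, 0) the true local gradient of black_box is (2, 3),
# so the surrogate's slopes should land near those values.
slopes = local_linear_slopes([1.0, 0.0], black_box)
```

SHAP approaches the same question differently, attributing the prediction to features via Shapley values, but both are model-agnostic: they query the model only through its predictions.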
About TrAC
The Translational AI Center will break down disciplinary silos to bring together core Iowa State artificial intelligence researchers and subject matter experts interested in applying new technologies to their work. The center will initially focus on conducting core artificial intelligence research, as well as pursuing five application areas of artificial intelligence:
- Materials design and manufacturing
- Biology, healthcare, and quality of life
- Autonomy, intelligent transportation, and smart infrastructure
- Food, energy, and water
- Ethics, fairness, and adoption
In addition to serving as a scientific hub for translational artificial intelligence, the center will organize research seminars, host workshops, training, and onboarding programs, offer seed funding for research projects, and serve as an intermediary between private industry partners seeking research services and appropriate university faculty.