100€ + 18% VAT
Course Language: English
The testing of traditional (non-AI) systems is well-understood, but AI-based systems, which are becoming more prevalent and critical to our daily lives, introduce exciting new challenges. This workshop introduces the key concepts of Artificial Intelligence (AI) and Machine Learning (ML), and explains how AI-based systems can be tested.
These systems are typically highly complex, poorly specified, reliant on data, and non-deterministic, which creates many new challenges and opportunities for testing them. On top of that, they are also used for making critical decisions, sometimes as part of safety-related systems, and are known to be difficult to trust due to their lack of explainability, perceived lack of ethics, and susceptibility to bias. We look at several example systems and how their special characteristics make them difficult to test.
The workshop explains how machine learning is often a key part of AI-based systems and identifies the essential activities of the ML workflow. Machine learning is brought to life and demystified by leading participants through building and testing several simple, but working, machine learning systems using an industry-standard ML development framework.
The three fundamental viewpoints for testing ML systems (Input Data Testing, ML Model Testing, and Development Framework Testing) are described. The test types from each of these viewpoints are explained before example test types from each viewpoint are demonstrated through hands-on exercises.
What you will learn
- The special characteristics of AI and ML systems and how these make the testing of these systems uniquely challenging;
- The three fundamental viewpoints that need to be considered when testing any ML system;
- How to build and test working ML systems without the need for any programming, using example test types from each of the three fundamental ML testing viewpoints.
- 3.5 hours
Stuart Reid is a software testing consultant with over 38 years’ experience in the IT industry, working in development, testing and education. Application areas range from safety-critical to financial and media.
Stuart supports the worldwide testing community in a number of roles. He is convener of the ISO Software Testing Working Group, which has already published several software testing standards and is currently developing new standards in the areas of AI, performance and automotive testing. He founded the International Software Testing Qualifications Board (ISTQB) to promote software testing qualifications globally.
Testing AI and Machine Learning Systems
Introduction to AI and Testing
What is AI and the AI Effect? Narrow, General and Super AI and Technological Singularity. Robots and Software Agents. AI Terminology and Types. Machine Learning vs Conventional Development. AI Hardware and Development Frameworks.
AI System Quality Characteristics
AI-Specific Characteristics. Autonomous and Self-Learning Systems. Side-Effects and Reward Hacking. Bias and Ethics. Trustworthiness and Explainability.
ML and the ML Workflow
ML Types (Classification, Regression, Clustering, Association and Reinforcement Learning). The Machine Learning Workflow. ML Training, Validation and Test Datasets. Overfitting and Underfitting. Machine Learning (ML) Performance Metrics (e.g. accuracy, precision, recall, F1-Score) and their selection.
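The ML performance metrics named above can be illustrated with a small sketch (not course material; the counts below are made up) that computes accuracy, precision, recall and F1-score from a binary confusion matrix:

```python
# Minimal sketch: the four standard metrics for a binary classifier,
# computed from confusion-matrix counts (tp/fp/fn/tn).

def classification_metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, f1) for binary outcomes."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of all correct predictions
    precision = tp / (tp + fp)                   # how many flagged positives were real
    recall = tp / (tp + fn)                      # how many real positives were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1

# Hypothetical counts: 90 true positives, 10 false positives,
# 30 false negatives, 870 true negatives.
acc, prec, rec, f1 = classification_metrics(90, 10, 30, 870)
```

Note how accuracy alone can look excellent on imbalanced data while recall is poor, which is why metric *selection* is a topic in its own right.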
AI-Specific Testing Issues
Testing Non-Deterministic Systems. Testing Self-Learning Systems. The Test Oracle Problem for AI.
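One common response to the test oracle problem is metamorphic testing: when no exact expected output exists, we instead check a *relation* between the outputs of related inputs. The sketch below (an assumed illustration, not taken from the workshop) uses a toy 1-nearest-neighbour classifier and the relation that translating every point by the same offset must not change the predicted class:

```python
# Minimal sketch of metamorphic testing against a toy classifier.
# We cannot say what the "correct" class is (the oracle problem),
# but we CAN say the answer must survive a uniform translation.

def nearest_neighbour_predict(train, labels, query):
    """Classify `query` by the label of the closest training point."""
    dists = [sum((q - t) ** 2 for q, t in zip(query, point)) for point in train]
    return labels[dists.index(min(dists))]

def metamorphic_translation_test(train, labels, query, offset):
    """Follow-up test case: shift all inputs; the prediction must not change."""
    original = nearest_neighbour_predict(train, labels, query)
    shifted_train = [tuple(x + o for x, o in zip(p, offset)) for p in train]
    shifted_query = tuple(x + o for x, o in zip(query, offset))
    follow_up = nearest_neighbour_predict(shifted_train, labels, shifted_query)
    return original == follow_up

train = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
labels = ["a", "a", "b"]
assert metamorphic_translation_test(train, labels, (0.9, 0.9), (10.0, -3.0))
```

The same idea scales to real ML systems: relations such as "re-ordering training data", "renaming a class", or "adding an irrelevant feature" give testable expectations without ever knowing the true answer.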
Testing ML Systems Overview
ML Workflow and the System Lifecycle. Fundamental Viewpoints for Testing ML Systems (Input Data Testing, ML Model Testing and Development Framework Testing).
Input Data Testing
Defect Types associated with Input Data. Test types for Input Data Testing (Data Governance Testing, Data Pipeline Testing, Data Provenance Testing, Data Sufficiency Testing, Dataset Constraint Testing, Feature Testing, Label Correctness Testing, Unfair Data Bias Testing).
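Dataset constraint testing, one of the test types above, can be sketched as a simple validation pass over the input data before it reaches training. This is illustrative only; the field names and permitted values are assumptions, not part of the course:

```python
# Minimal sketch of dataset constraint testing: every record is checked
# against declared constraints (value ranges, permitted label set) and
# violations are reported by (record index, field name).

CONSTRAINTS = {
    "age": lambda v: isinstance(v, (int, float)) and 0 <= v <= 120,
    "label": lambda v: v in {"approve", "reject"},
}

def constraint_violations(records):
    """Return (record index, field) pairs for every constraint breach."""
    violations = []
    for i, record in enumerate(records):
        for field, check in CONSTRAINTS.items():
            if field not in record or not check(record[field]):
                violations.append((i, field))
    return violations

data = [
    {"age": 34, "label": "approve"},
    {"age": -5, "label": "reject"},   # out-of-range age
    {"age": 51, "label": "maybe"},    # label outside permitted set
]
```

Related test types in the list (label correctness, data sufficiency) extend the same idea from per-record checks to dataset-level properties.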
ML Model Testing
Defect Types associated with ML Models. Test types for Model Testing (A/B Testing, Adversarial Testing, API Testing, Back-to-Back Testing, Boundary Value Analysis, Combinatorial Testing, Ethical System Testing, Exploratory Testing, Fuzz Testing, Metamorphic Testing, Model Bias Testing, Model Documentation Testing, Model Performance Testing, Model Suitability Testing, Model Validation Testing, Operational Testing, Overfitting Testing, Performance Efficiency Testing, Reward Hacking Testing, Scenario Testing, Side-Effects Testing, Smoke Testing, White-Box Testing of Neural Networks).
Development Framework Testing
Defect Types associated with ML Algorithms and Development Frameworks. Test types for Development Framework Testing (Configuration Testing, Optimization Testing, Performance Testing, Recoverability Testing, Release Testing, Reproducibility Testing, Roll-Back Testing, Security Testing, Suitability Testing).