AI Governance Foundations
About the Course
This course introduces AI fundamentals, governance principles, and global regulations, and covers lifecycle governance, risk assessment, and responsible AI tooling. It combines theory with case studies, practical exercises, and an assessment to equip participants with actionable AI governance skills.
Duration: 16 hours
Mode: Classroom or Online Session
Foundations of AI and AI Governance
What is AI? (Basics of ML, NLP, computer vision, etc.)
Difference between AI, ML, and automation
Why AI governance matters
Risks and opportunities of AI
Definitions: ethical AI, responsible AI, trustworthy AI
Principles of Responsible AI
Core principles:
Fairness
Transparency
Accountability
Privacy
Safety and robustness
Human oversight
Aligning with company values or public standards (e.g., OECD, EU AI Act, ISO 42001)
AI Regulations and Compliance
Overview of global AI regulations:
EU AI Act
GDPR (as it applies to automated decision-making)
NIST AI Risk Management Framework
ISO/IEC 42001 (AI management systems)
Industry-specific regulations (e.g., finance, healthcare, HR)
Obligations for high-risk and general-purpose AI systems
Governance Structures and Roles
What is AI governance?
Internal governance frameworks
Role of:
AI ethics board
Data protection officer
AI product managers
Model risk management teams
Decision rights and escalation processes
AI Lifecycle Governance
How to govern AI at each stage:
Data collection & labeling
Model design and development
Testing and validation
Deployment and integration
Monitoring and retirement
Tools for governance:
Model cards (a minimal sketch follows this list)
Datasheets for datasets
Audit trails
Version control and documentation
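As a concrete illustration of the "model cards" item above, here is a minimal sketch of a model card captured as structured data. The field names are a simplified, hypothetical subset inspired by "Model Cards for Model Reporting" (Mitchell et al., 2019); a real program would adapt them to its own policy, and all values shown are invented for the exercise.

```python
import json
from dataclasses import dataclass, asdict, field

# Illustrative model card structure. The fields are a simplified,
# hypothetical subset; adapt them to your organisation's policy.
@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    human_oversight: str = ""

card = ModelCard(
    model_name="resume-screener",           # hypothetical system
    version="1.2.0",
    intended_use="Rank job applications for recruiter review.",
    out_of_scope_uses=["Fully automated rejection without human review"],
    training_data="2019-2023 applications, anonymised (hypothetical).",
    evaluation_metrics={"accuracy": 0.87, "demographic_parity_diff": 0.04},
    known_limitations=["Lower recall for career-gap resumes"],
    human_oversight="Recruiter reviews every ranked shortlist.",
)

# Persist the card alongside the model artefact so audits can retrieve it.
print(json.dumps(asdict(card), indent=2))
```

Keeping the card as structured data (rather than free text) means it can be versioned with the model and checked for completeness in a release pipeline, which ties directly into the audit-trail and version-control items above.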
Risk Management and Impact Assessment
How to identify, assess, and mitigate AI risks:
Bias and discrimination
Explainability gaps
Security vulnerabilities
Model drift (see the drift-check sketch below)
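The model drift risk can be made concrete with a simple statistical check: compare the distribution of model scores at serving time against a training-time baseline. The sketch below uses SciPy's two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 alert threshold are illustrative assumptions, not a standard.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Baseline: scores the model produced during validation (synthetic here).
baseline_scores = rng.normal(loc=0.50, scale=0.10, size=5_000)
# Live traffic: the same scores months later, with a simulated shift.
live_scores = rng.normal(loc=0.58, scale=0.12, size=5_000)

# Two-sample KS test: are the two score distributions plausibly the same?
stat, p_value = ks_2samp(baseline_scores, live_scores)

# The alert threshold is a policy choice; 0.05 is only an illustration.
if p_value < 0.05:
    print(f"Drift suspected (KS={stat:.3f}, p={p_value:.2e}): escalate for review.")
else:
    print(f"No significant drift detected (KS={stat:.3f}, p={p_value:.2e}).")
```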
Introduction to:
Algorithmic Impact Assessments (AIAs)
Data protection impact assessments (DPIAs)
Risk scoring and categorization (e.g., low-, medium-, high-risk AI), illustrated in the sketch below
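To make risk scoring and categorization tangible, here is a deliberately simple, hypothetical rubric that combines a few yes/no risk factors into a tier. Real regimes such as the EU AI Act classify systems by use case, not by an additive point score, so treat this purely as a classroom exercise; every factor name and weight below is an assumption.

```python
# Hypothetical rubric for a classroom exercise: each factor adds points,
# and the total maps to a tier. Real regimes (e.g., the EU AI Act)
# classify by use case, not by an additive score.
RISK_FACTORS = {
    "affects_legal_or_financial_outcomes": 3,
    "processes_sensitive_personal_data": 2,
    "fully_automated_decision": 2,
    "limited_explainability": 1,
    "novel_or_unproven_model_class": 1,
}

def categorize(system_profile: dict) -> str:
    """Map a dict of boolean risk factors to an illustrative tier."""
    score = sum(points for factor, points in RISK_FACTORS.items()
                if system_profile.get(factor, False))
    if score >= 5:
        return "high-risk"
    if score >= 3:
        return "medium-risk"
    return "low-risk"

# Example: a credit-scoring system (hypothetical profile).
profile = {
    "affects_legal_or_financial_outcomes": True,
    "processes_sensitive_personal_data": True,
    "fully_automated_decision": False,
    "limited_explainability": True,
}
print(categorize(profile))  # -> "high-risk" (3 + 2 + 1 = 6)
```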
Tools and Technologies for Responsible AI
Fairness and bias detection tools (e.g., Fairlearn, Aequitas; see the sketch after this list)
Explainability tools (e.g., SHAP, LIME)
Model monitoring platforms (e.g., Fiddler, WhyLabs)
Governance platforms (e.g., Credo AI, Arthur AI)
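As a small taste of the fairness tooling listed above, the sketch below computes the demographic parity difference between two groups with Fairlearn's metrics API. The predictions, labels, and sensitive attribute are all synthetic, and the simulated bias (group "A" receiving more positive outcomes) is contrived for illustration.

```python
import numpy as np
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)

# Synthetic labels and a synthetic sensitive attribute with two groups.
y_true = rng.integers(0, 2, size=1_000)
group = rng.choice(["A", "B"], size=1_000)
# Simulate a biased model: group A receives positive outcomes more often.
y_pred = np.where(group == "A",
                  rng.random(1_000) < 0.60,
                  rng.random(1_000) < 0.45).astype(int)

# Demographic parity difference: the gap in positive-prediction rates
# between groups; 0 means parity, larger values mean more disparity.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {dpd:.3f}")
```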
Human Oversight and Escalation Paths
Designing “human-in-the-loop” systems (see the routing sketch after this list)
Decision override and review processes
Red flags for escalation
Incident response playbooks for AI errors
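One way to see the human-in-the-loop design referenced at the top of this section is as a routing rule: low-confidence or high-impact predictions go to a human review queue instead of being auto-applied. The thresholds, use-case names, and fields below are illustrative assumptions; real values are a governance decision set per use case.

```python
from dataclasses import dataclass

# Illustrative thresholds: real values are a governance decision,
# set per use case and revisited as the model is monitored.
CONFIDENCE_FLOOR = 0.80
HIGH_IMPACT_USES = {"credit_decision", "hiring", "medical_triage"}

@dataclass
class Prediction:
    use_case: str
    label: str
    confidence: float

def route(pred: Prediction) -> str:
    """Decide whether a prediction can be auto-applied or needs a human."""
    if pred.use_case in HIGH_IMPACT_USES:
        return "human_review"   # high-impact: always reviewed
    if pred.confidence < CONFIDENCE_FLOOR:
        return "human_review"   # low confidence: escalate
    return "auto_apply"         # routine and confident: apply directly

print(route(Prediction("spam_filter", "spam", 0.97)))          # auto_apply
print(route(Prediction("spam_filter", "spam", 0.55)))          # human_review
print(route(Prediction("credit_decision", "approve", 0.99)))   # human_review
```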
Communication and Transparency
Explaining AI decisions to stakeholders
Communicating AI limitations and risks
Disclosures and user notices
Engaging external stakeholders (e.g., regulators, customers)
Case Studies and Practical Exercises
Real-world failures (e.g., biased recruiting AI, facial recognition misuse)
Hands-on exercises:
Performing a model risk assessment
Designing an AI review checklist
Simulating an AI incident response
Business leaders & executives (CEOs, founders, board members)
Policy makers, regulators & compliance officers
AI/ML engineers, data scientists & product managers
Model risk managers & AI product leads in regulated industries
AI ethics board members & governance professionals
Data protection officers (DPOs)
Academics, students & researchers in AI, law, or ethics
Professionals in high-risk sectors (finance, healthcare, HR, defense, education)
What You Will Learn
Understand the basics of AI, ML, NLP, and computer vision.
Differentiate between AI, ML, and automation.
Gain knowledge of ethical, responsible, and trustworthy AI principles.
Learn the core principles of Responsible AI: fairness, transparency, accountability, privacy, safety, robustness, and human oversight.
Become familiar with global AI regulations and compliance frameworks (EU AI Act, GDPR, NIST AI RMF, ISO/IEC 42001).
Understand industry-specific AI obligations (finance, healthcare, HR).
Learn to design internal AI governance structures (ethics boards, DPO roles, escalation processes).
Manage AI lifecycle governance across data collection, model development, testing, deployment, monitoring, and retirement.
Apply tools like model cards, datasheets, audit trails, and version control for AI governance.
Conduct AI risk assessments (bias, explainability, security, model drift).
Use AI fairness, bias detection, and explainability tools (e.g., SHAP, LIME, Fairlearn).
Develop human-in-the-loop systems and escalation processes.
Improve communication and transparency of AI decisions with stakeholders.
Analyze real-world AI failures and apply lessons learned.
Perform hands-on exercises: model risk assessments, AI review checklists, and incident response simulations.
About the Trainer
Ganesh Kannan (PMP) has more than 15 years of IT experience spanning software testing, test consulting, and project and change management. He has worked for investment banks such as Barclays Capital and IT services firms such as Zensar Technologies. He has managed the testing tools and process function for a top-tier investment bank and has led large offshore testing teams. He brings extensive project management and consulting experience in delivering IT applications, and spearheads the classroom Fundamentals of Software Testing course in Singapore.