All Episodes

Displaying 21–40 of 50 in total

Episode 21 — Communicating with Humans

Responsible AI requires not just transparency in technical systems but also clear communication that humans can understand and trust. This episode explains the princip...

Episode 22 — Privacy by Design for AI

Privacy by design is the principle of embedding privacy protections into systems from the outset rather than adding them later. This episode introduces its core princi...

Episode 23 — Differential Privacy in Practice

Differential privacy provides mathematical guarantees that individual records cannot be re-identified from aggregated results. This episode introduces its core concept...

Episode 24 — Federated & Edge Approaches

Federated learning and edge AI represent architectural strategies to protect privacy and reduce reliance on centralized data collection. Federated learning trains mode...

Episode 25 — Synthetic Data

Synthetic data is artificially generated to mimic real datasets while reducing reliance on sensitive information. This episode explains how it can protect privacy, exp...

Episode 26 — Retention, Deletion & Data Rights

Responsible AI requires clear practices for how long data is kept, how it is securely deleted, and how organizations honor user rights. This episode defines retention ...

Episode 27 — Threat Modeling for AI Systems

Threat modeling is the process of systematically identifying and prioritizing risks that could compromise AI systems. This episode introduces the core components of th...

Episode 28 — Adversarial ML

Adversarial machine learning focuses on how attackers manipulate AI models and how defenders respond. This episode introduces four major categories of adversarial atta...

Episode 29 — LLM-Specific Risks

Large language models (LLMs) present risks distinct from earlier AI systems due to their general-purpose scope and broad deployment. This episode highlights unique thr...

Episode 30 — Content Safety & Toxicity

AI systems that generate or moderate content must address the risk of harmful outputs. This episode introduces content safety as a set of controls designed to prevent ...

Episode 31 — Red Teaming & Safety Evaluations

Red teaming and safety evaluations are proactive practices designed to uncover vulnerabilities and harms in AI systems before they reach users. This episode defines re...

Episode 32 — Hallucinations & Factuality

Large language models frequently generate outputs that sound convincing but are factually incorrect, a phenomenon known as hallucination. This episode introduces hallu...

Episode 33 — Designing Evaluations

Effective evaluation frameworks are essential to ensuring AI systems perform reliably and responsibly. This episode introduces task-grounded evaluations, which measure...

Episode 34 — Human in the Loop

Human-in-the-loop describes oversight models where people remain actively involved in AI decision-making. This episode explains three main approaches: pre-decision ove...

Episode 35 — Monitoring & Drift

Monitoring ensures AI systems continue to perform as intended after deployment, while drift refers to changes in data or environments that degrade accuracy and fairnes...

Episode 36 — Incidents & Postmortems

Even with strong safeguards, AI systems inevitably experience failures or incidents that create harm or expose vulnerabilities. This episode defines incidents as unpla...

Episode 37 — Copyright & Licensing in GenAI

Generative AI raises complex intellectual property questions about both training data and outputs. This episode introduces copyright as legal protection for creators a...

Episode 38 — Provenance & Watermarking

Provenance and watermarking are methods for tracking and identifying AI-generated content. Provenance refers to capturing the history of data or outputs, often through...

Episode 39 — Inclusive & Accessible AI

Inclusivity and accessibility ensure AI systems serve all users equitably, regardless of background, language, or ability. This episode defines inclusivity as designin...

Episode 40 — Choice Architecture & Dark Patterns

Choice architecture refers to how options are presented to users, while dark patterns are manipulative designs that steer users toward decisions not in their best inte...
