Listen now!
LuminaTalks
Welcome to LuminaTalks: in good company, your shot of tech dopamine!
Hosted by Kevin De Pauw, LuminaTalks dives deep into the evolving world of AI, data strategy, and digital innovation. From AI agents and cybersecurity to data platforms and the ethics of emerging tech, we explore the power moves and pitfalls shaping the future of intelligent systems.
Whether you're a tech leader, founder, or just curious about the real-world impact of AI, LuminaTalks delivers clear insights, bold opinions, and practical takeaways to help you stay ahead.



New bi-weekly episodes on YouTube and Spotify
Latest Episodes

SEASON FINALE
EP 10 - Season 1
The Dark Side of AI Supply Chains: Inside HuggingFace Risks & Real Exploits
What if the models fueling innovation are already compromised? Hyrum Anderson (Cisco) reveals how AI supply chain vulnerabilities, model trojans, and open-source AI exploits are reshaping cybersecurity. Learn about behavior inheritance, model provenance, and how to secure foundation models and LLMOps pipelines. A critical guide for AI security teams, CISOs, and developers ensuring trust in AI from model to market.

More Episodes

EP 10 - Season 1
The Dark Side of AI Supply Chains: Inside HuggingFace Risks & Real Exploits
What if the models fueling innovation are already compromised? Hyrum Anderson (Cisco) reveals how AI supply chain vulnerabilities, model trojans, and open-source AI exploits are reshaping cybersecurity. Learn about behavior inheritance, model provenance, and how to secure foundation models and LLMOps pipelines. A critical guide for AI security teams, CISOs, and developers ensuring trust in AI from model to market.

EP 9 - Season 1
The Silent Killer of AI Projects: MLOps Done Wrong
Behind the AI hype lies a scaling crisis. We reveal how MLOps and LLMOps frameworks bridge the gap between AI prototypes and enterprise deployment. Learn the cultural, compliance, and architectural shifts that make teams production-ready. Perfect for AI leaders, DevOps engineers, and risk officers driving responsible AI implementation and model governance in 2025.

EP 8 - Season 1
Why Most AI Projects Crash and Burn, and How MLOps/LLMOps Fix It!
Most AI projects die before reaching production. In this episode, we dissect why—with lessons on MLOps, LLMOps, and AI governance that turn prototypes into production-ready solutions. Discover how to align data science and engineering, prevent model drift, and ensure AI compliance. Packed with real-world insights for CTOs, data scientists, and ML engineers ready to scale GenAI systems safely.

EP 7 - Season 1
AI That Thinks Ethically: Can We Build Agents That Actually Learn?
Are we building AI that helps people—or just imitates them? Geertrui Mieke De Ketelaere (Vlerick, ex-Imec) joins to explore ethical AI, human-centered design, and AI governance. We discuss responsible AI companions, sustainable AI deployment, and how empathy and transparency build digital trust. For leaders, educators, and developers shaping a more ethical and sustainable AI future.

EP 6 - Season 1
What If AI Could Think Together? Inside the Internet for AI Agents (ACP)
Can AI agents collaborate across ecosystems? IBM Research experts Sandi Besen and Aakriti Aggarwal unpack the Agent Communication Protocol (ACP)—the “HTTP for AI agents.” Explore multi-agent systems, BeeAI framework, agent lifecycle management, and compliance by design in distributed AI. From ESG reporting to travel automation, discover how ACP and open-source standards are shaping the next wave of AI interoperability and digital trust.

EP 5 - Season 1
Are Humans Getting Dumber as AI Gets Smarter? (MCP, A2A Explained)
AI-to-AI communication is here—and it’s changing everything. In this follow-up, Vineeth Sai Narajala explores the rise of MCP, A2A, and the Agent Name Service (ANS) powering the Internet of Agents. Learn how to prevent rug-pull attacks, tool poisoning, and token abuse, while ensuring governance, auditability, and human-in-the-loop safety. Perfect for anyone building secure, autonomous, and transparent AI systems.

EP 4 - Season 1
The Hidden Security Crisis in Your AI Tools, MCP explained w. Vineeth S. Narajala
As AI agents evolve, so do their risks. Security researcher Vineeth Sai Narajala (OWASP, AWS) joins to reveal vulnerabilities in MCP and A2A systems—from token theft and prompt injection to tool poisoning. Discover how to secure agentic AI, implement zero-trust architectures, and apply OWASP GenAI Security Principles. This is essential listening for teams deploying GenAI tools into production environments.

EP 3 - Season 1
Turn Compliance Into a Competitive Edge, with Marc Dekeyser
Compliance isn’t just paperwork—it’s your growth engine. Marc Dekeyser (Microsoft) joins to discuss how compliance by design, trust in tech, and secure cloud innovation turn regulations into opportunity. Explore data governance, sovereignty, and continuous compliance as competitive advantages. From startups to enterprises, learn how building trust and transparency creates long-term digital resilience.

EP 2 - Season 1
Killing Legacy Tech: How MCP Is Redefining AI Systems, with Xavier Geerinck
The Model Context Protocol (MCP) is redefining how AI agents communicate and share context. With guest expert Xavier Geerinck, we explore agent interoperability, AI governance, and decentralized AI in practice. Learn how MCP enables context-aware automation while balancing privacy, compliance, and cost-effective deployment. A must-listen for AI architects, CTOs, and innovators shaping the multi-agent future of intelligent systems.

EP 1 - Season 1
From Control to Freedom: Why Smart Companies Are Embracing Decentralization
Messy data? Disconnected tools? In our debut episode, we unpack how data governance, decentralized architectures, and AI-driven interoperability can turn chaos into growth. Discover the “Lego Principle” for scalable systems, the cost of SaaS lock-in, and why data sovereignty matters more than ever. Hosted by Leonardo Minatti and Kevin De Pauw, this episode bridges business compliance, digital trust, and AI integration for leaders scaling smarter.




