Listen now!

LuminaTalks


Welcome to LuminaTalks. In Good Company: Your Shot of Tech Dopamine!

Hosted by Kevin De Pauw, LuminaTalks dives deep into the evolving world of AI, data strategy, and digital innovation. From AI agents and cybersecurity to data platforms and the ethics of emerging tech, we explore the power moves and pitfalls shaping the future of intelligent systems.



Whether you're a tech leader, founder, or just curious about the real-world impact of AI, LuminaTalks delivers clear insights, bold opinions, and practical takeaways to help you stay ahead.


New bi-weekly episodes on YouTube and Spotify


Latest Episodes

SEASON FINALE

EP 6 - Season 2

OWASP Expert reveals hidden vulnerabilities in AI agents | Vineeth Sai Narajala #S02E06

AI is no longer just generating answers. It’s executing tasks, using tools, interacting with other agents, and operating with autonomy.


More Episodes

All
Season 1
Season 2

EP 6 - Season 2

OWASP Expert reveals hidden vulnerabilities in AI agents | Vineeth Sai Narajala #S02E06

AI is no longer just generating answers. It’s executing tasks, using tools, interacting with other agents, and operating with autonomy.

EP 5 - Season 2

Only 0.3% Founders will Reach €10M in Revenue | Jürgen Ingels Explains Why #S02E05

Modern startups are being built with smaller teams, higher profitability, and radically different operating models. Yet Europe is falling behind in tech adoption, even as automation and AI already influence investment decisions long before a founder walks into a room. Why?

EP 4 - Season 2

The BIGGEST Mistake You're Making with AI Right Now | Petar Tsankov #S02E04

We unpack why traditional, checkbox-driven AI compliance is breaking down, and why governance can no longer live in policies, audits, or PDFs. As AI systems scale faster than regulation, enterprises are being forced to rethink compliance as an engineering problem, not a legal one. This episode goes deep into how AI governance must shift from documentation to execution.

EP 3 - Season 2

AI Safety Expert: When AI Understands You Better Than You Realise | Dr. Marta Bieńkiewicz #S2EP3

We explore the ethics, tech and governance of AI agents, from VR neuro-rehab to delegation, identity, and whether agents can (or should) act on our behalf. We are entering the era of agentic AI: autonomous systems that don’t just execute instructions but represent you in decisions, negotiations, and even relationships.

EP 2 - Season 2

The AI Regulation Expert: Your Last Wake-Up Call on EU AI Act Rules | Kai Zenner

Kai Zenner, one of the key minds shaping Europe’s AI policy, sat down with me for a real, no-BS conversation about how the AI Act might actually strengthen founders instead of slowing them down.

EP 1 - Season 2

EU Inc: Europe's Plan to Finally Unleash its Startups | 28th Regime with Greta Koch & Iwona Biernat

What if Europe had one single system to start, fund, and scale startups? In this special episode, Greta Koch (European Parliament) and Iwona Anna Biernat (Project Europe) join host Kevin De Pauw to unpack the vision behind EU-INC, the “28th regime” designed to unify European company law and accelerate innovation.

EP 10 - Season 1

The Dark Side of AI Supply Chains: Inside HuggingFace Risks & Real Exploits

What if the models fueling innovation are already compromised? Hyrum Anderson (Cisco) reveals how AI supply chain vulnerabilities, model trojans, and open-source AI exploits are reshaping cybersecurity. Learn about behavior inheritance, model provenance, and how to secure foundation models and LLMOps pipelines. A critical guide for AI security teams, CISOs, and developers ensuring trust in AI from model to market.

EP 9 - Season 1

The Silent Killer of AI Projects: MLOps Done Wrong

Behind the AI hype lies a scaling crisis. We reveal how MLOps and LLMOps frameworks bridge the gap between AI prototypes and enterprise deployment. Learn the cultural, compliance, and architectural shifts that make teams production-ready. Perfect for AI leaders, DevOps engineers, and risk officers driving responsible AI implementation and model governance in 2025.

EP 8 - Season 1

Why Most AI Projects Crash and Burn, and How MLOps/LLMOps Fix It!

Most AI projects die before reaching production. In this episode, we dissect why—with lessons on MLOps, LLMOps, and AI governance that turn prototypes into production-ready solutions. Discover how to align data science and engineering, prevent model drift, and ensure AI compliance. Packed with real-world insights for CTOs, data scientists, and ML engineers ready to scale GenAI systems safely.

EP 7 - Season 1

AI That Thinks Ethically: Can We Build Agents That Actually Learn?

Are we building AI that helps people—or just imitates them? Geertrui Mieke De Ketelaere (Vlerick, ex-Imec) joins to explore ethical AI, human-centered design, and AI governance. We discuss responsible AI companions, sustainable AI deployment, and how empathy and transparency build digital trust. For leaders, educators, and developers shaping a more ethical and sustainable AI future.

EP 6 - Season 1

What If AI Could Think Together? Inside the Internet for AI Agents (ACP)

Can AI agents collaborate across ecosystems? IBM Research experts Sandi Besen and Aakriti Aggarwal unpack the Agent Communication Protocol (ACP)—the “HTTP for AI agents.” Explore multi-agent systems, BeeAI framework, agent lifecycle management, and compliance by design in distributed AI. From ESG reporting to travel automation, discover how ACP and open-source standards are shaping the next wave of AI interoperability and digital trust.

EP 5 - Season 1

Are Humans Getting Dumber as AI Gets Smarter? (MCP, A2A Explained)

AI-to-AI communication is here—and it’s changing everything. In this follow-up, Vineeth Sai Narajala explores the rise of MCP, A2A, and the Agent Name Service (ANS) powering the Internet of Agents. Learn how to prevent rug-pull attacks, tool poisoning, and token abuse, while ensuring governance, auditability, and human-in-the-loop safety. Perfect for anyone building secure, autonomous, and transparent AI systems.
