Your AI, Your Way
In this podcast, we examine AI infrastructure from an enterprise perspective. Guests with backgrounds in enterprise IT, cloud architecture, security, finance, and education join MDCS.ai in the Cisco podcast studio to share practical experience and informed viewpoints.
Each episode addresses the questions that arise once AI initiatives move beyond experimentation and into production.
How do you design infrastructure that truly scales?
What happens to cost, performance, and control as AI workloads grow?
How do organizations balance speed, security, data sovereignty, and long-term ownership?
Rather than focusing on trends or product promotion, the discussions are grounded in real-world challenges—covering architectural choices, operating models, governance, accountability, and the trade-offs organizations must navigate when building or scaling AI environments.
Your AI, Your Way is intended for AI leaders and practitioners responsible for delivering AI in practice, not just in theory.
End-to-End Security
You cannot secure what you cannot see.
When cloud adoption took off, employees picked their own tools without IT's knowledge. The industry called it Shadow IT. The same pattern is now repeating with AI.
Developers pull models from Hugging Face because it is convenient. Over one hundred thousand models live there, and most have never undergone any security review.
A well-known vendor recently published a PyTorch container with nearly one hundred documented vulnerabilities. Some can be patched. Some cannot.
For AI, there is no Patch Tuesday yet.
The risk goes beyond infrastructure. A model that answers questions helpfully can also leak sensitive data when a prompt is phrased differently. Securing containers is one discipline; understanding what a model actually does is another.
In this 35-minute discussion recorded at the Cisco Studio in Amsterdam, Michel Cosman (MDCS.AI) and Jan Heijdra (Cisco) examine what end-to-end security means when AI workloads enter production.
Key topics include:
- Why "Shadow AI" is becoming the new Shadow IT, and how organizations regain visibility.
- The difference between securing infrastructure and securing model behavior.
- How attackers fire 50,000 prompts at a model to find vulnerabilities, and how defenders can do the same.
- What the EU AI Act demands in terms of auditability, and why it is no longer optional.
- Why AI security needs to be a boardroom conversation, not an IT project.