Your AI, Your Way
In this podcast, we examine AI infrastructure from an enterprise perspective. Guests with backgrounds in enterprise IT, cloud architecture, security, finance, and education join MDCS.ai in the Cisco podcast studio to share practical experience and informed viewpoints.
Each episode addresses the questions that arise once AI initiatives move beyond experimentation and into production.
How do you design infrastructure that truly scales?
What happens to cost, performance, and control as AI workloads grow?
How do organizations balance speed, security, data sovereignty, and long-term ownership?
Rather than focusing on trends or product promotion, the discussions are grounded in real-world challenges—covering architectural choices, operating models, governance, accountability, and the trade-offs organizations must navigate when building or scaling AI environments.
Your AI, Your Way is intended for AI leaders and practitioners responsible for delivering AI in practice, not just in theory.
Simplicity
Initiating AI workloads in the cloud is straightforward. GPUs can be provisioned quickly, experiments launched immediately, and early results demonstrated to leadership—without capital expenditure or procurement delays.
The challenge emerges at scale.
As systems move into production, costs escalate. Finance asks why cloud spend doubled last quarter. Security teams want clarity on where sensitive training data resides. Machine learning engineers hit compute bottlenecks despite significant allocated capacity.
When failures occur, accountability becomes fragmented. With multiple vendors involved, resolution is slow and responsibility diffuse.
What once took hours to deploy can take weeks to stabilize.
In this 37-minute discussion recorded at Cisco Studio Amsterdam, Raymond Drielinger (MDCS.AI) and Jara Osterfeld (Cisco) examine what happens when AI workloads outgrow the cloud sandbox and enter enterprise reality.
Key topics include:
- Why GPUs remain underutilized in shared cloud environments while costs continue to accrue.
- How “noisy neighbor” effects degrade model performance—and why identical workloads often run faster on-premises.
- The difference between assembling hundreds of disconnected components and deploying an integrated, high-performance system engineered for immediate results.
- How a single point of accountability replaces multi-vendor finger-pointing.
A practical perspective on what it truly takes to scale AI beyond experimentation.