This weekend, I had a fascinating discussion that reinforced a critical but often overlooked part of Agentic AI design: the strategic choice between deterministic and probabilistic planning.
When building enterprise-grade agent systems, the question isn't just how agents make decisions, but how they plan: how they break down tasks, execute workflows, and maintain reliability.
Two Approaches to Agent Planning
Probabilistic Planning ("Chasing the Squirrels")
- Uses LLMs to dynamically create execution plans
- Flexible, context-aware, and adaptive
- BUT: plans can change unpredictably, leading to bias, inconsistency, and lack of repeatability
- Works well in creative, exploratory, or loosely defined use cases
Deterministic Planning ("Following the Path")
- Uses predefined rules and structured workflows (Parcha called this "Agents on Rails", a term I like!)
- Provides reliability, predictability, and repeatability
- BUT: can struggle with novel situations and dynamic flexibility
- Ideal for regulated, high-assurance workflows like compliance, risk, and finance
Parcha did a great write-up on why purely probabilistic planning failed for them (link in comments), but there are plenty of valid use cases where it works. The real answer? It depends on the architecture.
The Architecture Is the Solution
It's NOT about choosing one over the other. It's about knowing when and how to use them together.
The best enterprise-grade agent systems don't eliminate LLMs; they integrate them as "glue" between deterministic components.
That's exactly how we use AI/LLM-generated Data Object Graphs (DOGs): as a dynamic execution layer that balances structure with flexibility.
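One way to picture the "LLM as glue" pattern: the model proposes a plan, but execution is constrained to a fixed registry of deterministic, auditable steps. This is a minimal illustrative sketch, not how DOGs are actually implemented; the step names and the `call_llm` stub are assumptions for demonstration.

```python
# Hypothetical sketch: an LLM proposes a plan, but only steps defined in a
# deterministic registry can run. `call_llm` is a stand-in for a real model.

def fetch_customer(ctx):
    # Deterministic step (stubbed): look up customer data.
    ctx["customer"] = {"id": ctx["customer_id"], "tier": "gold"}
    return ctx

def check_compliance(ctx):
    # Deterministic step: apply a fixed, auditable rule.
    ctx["compliant"] = ctx["customer"]["tier"] in ("gold", "silver")
    return ctx

STEP_REGISTRY = {
    "fetch_customer": fetch_customer,
    "check_compliance": check_compliance,
}

def call_llm(prompt):
    # Stand-in for an LLM call that returns a proposed plan as step names.
    return ["fetch_customer", "check_compliance"]

def run_plan(customer_id):
    plan = call_llm(f"Plan the onboarding steps for customer {customer_id}")
    # Guardrail: reject any step the deterministic registry doesn't define.
    unknown = [s for s in plan if s not in STEP_REGISTRY]
    if unknown:
        raise ValueError(f"LLM proposed unsupported steps: {unknown}")
    ctx = {"customer_id": customer_id}
    for step in plan:
        ctx = STEP_REGISTRY[step](ctx)
    return ctx
```

The key design point: the probabilistic part only chooses *which* vetted components run and in what order; it never invents new behavior at execution time.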
Three Key Considerations for AI Agent Planning
1. Decomposition Over Delegation
Instead of handing over entire workflows to a large LLM, break complex tasks into structured, modular components. This keeps things controllable while still leveraging AI for dynamic execution.
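Decomposition can be sketched as giving each sub-task a narrow, testable function, with the LLM confined to the small judgment call and deterministic code handling the rest. The `classify` stub and routing table below are hypothetical, purely for illustration.

```python
# Illustrative decomposition: one narrow probabilistic step, one
# deterministic step, instead of a single monolithic LLM prompt.

def classify(ticket_text):
    # Narrow LLM-backed call (stubbed here): returns a category label.
    return "billing" if "invoice" in ticket_text.lower() else "general"

def route(category):
    # Deterministic routing table: fully auditable, no LLM involved.
    return {"billing": "finance-queue", "general": "support-queue"}[category]

def handle_ticket(ticket_text):
    category = classify(ticket_text)   # probabilistic, small scope
    queue = route(category)            # deterministic, repeatable
    return {"category": category, "queue": queue}
```

Because each piece is small, you can test the deterministic parts exhaustively and evaluate the probabilistic part in isolation.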
2. Match the Tool to the Risk Tolerance
- Use deterministic methods for high-assurance tasks (compliance, risk assessment, mission-critical operations).
- Use probabilistic methods where contextual adaptability is more valuable than rigid consistency (creative workflows, conversational AI).
3. Consider the Benchmark
- We aren't comparing AI agents to perfect deterministic systems; we're comparing them to human processes, which are inherently probabilistic and inconsistent.
- Sometimes, AI only needs to be "good enough", but knowing when "good enough" isn't enough can prevent costly failures down the road.
Blending the Two: A Real-World Example
A colleague shared a hybrid approach they implemented:
Scenario: an AI-powered chatbot for complex product recommendations
- Deterministic: the overall user journey follows a structured, well-defined flow
- Probabilistic: the questions adapt based on the user's industry and role, dynamically adjusting responses at each step
This blend creates an AI system that is structured yet flexible, adaptable yet reliable: a perfect example of the "bounded creativity" approach.
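The "bounded creativity" pattern above can be sketched in a few lines: the stage sequence is a fixed list (deterministic journey), while the question asked at each stage adapts to accumulated context (probabilistic, stubbed here). Stage names and the `generate_question` stub are assumptions, not the colleague's actual implementation.

```python
# Deterministic journey, probabilistic questions: a minimal sketch of the
# hybrid chatbot pattern. `generate_question` stands in for an LLM call.

STAGES = ["industry", "role", "requirements", "recommendation"]  # fixed order

def generate_question(stage, context):
    # Stand-in for an LLM that tailors the question to prior answers.
    return f"[{stage}] tailored question given {sorted(context)}"

def run_journey(answers):
    context = {}
    transcript = []
    for stage in STAGES[:-1]:           # the flow itself never varies
        transcript.append(generate_question(stage, context))
        context[stage] = answers[stage]  # user's reply at this stage
    transcript.append(f"recommendation based on {sorted(context)}")
    return transcript
```

The LLM can phrase each question however it likes, but it cannot skip, reorder, or invent stages, which is what keeps the journey reliable.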
The Future of Enterprise Agentic AI
The real key to scaling AI Agents isn't just better models; it's better architecture.
- Deterministic components provide structure and guardrails
- Probabilistic components provide adaptability and contextual understanding
- Data Object Graphs (DOGs) act as the intelligent execution layer that ties them together
We wrote about Agentic Query Plans in a previous blog post, which provides another perspective on this hybrid approach.
Would love to hear your thoughts! Reach out to learn how we can help deterministic and probabilistic approaches coexist in your AI-driven workflows.
With Dataception's DOGs, AI is just a walk in the park.