Webinar Recap: From Automation to Autonomy — Achieving Transformational Business Value with Agentic AI
Summary
Aera Technology’s Future.Now webinar, “From Automation to Autonomy: Achieving Transformational Business Value with Agentic AI,” explored how leading enterprises are moving beyond routine automation to intent-driven, self-optimizing operations powered by Aera, the decision intelligence agent. Speakers from Accenture and Aera detailed why autonomy — rather than incremental automation — is now the fastest route to resilience, agility, and competitive advantage, especially in disruption-prone supply chains.
Over the course of the session, they unpacked fresh findings from Accenture’s 2024 Autonomous Supply-Chain Global Survey, shared real-world examples in demand planning, inventory optimization, and promotion execution, and highlighted the architectural ingredients — Decision Intelligence Networks and composable agent teams — that allow autonomy to scale safely across complex global enterprises.
Key Takeaways
- Intent-driven operations now outperform function-driven processes. Autonomous agents translate high-level business goals (e.g., “minimize stock-outs during a promotion”) into coordinated, cross-functional actions, eliminating the silos and manual hand-offs that slow traditional process flows. This shift frees human experts to focus on strategy rather than transactional orchestration.
- Fresh survey data underscores why autonomy can’t wait. Accenture’s 2024 Autonomous Supply-Chain Global Survey reveals that more than 70 percent of supply-chain leaders deem autonomy “mission-critical” within three years, citing faster disruption recovery, greater forecast accuracy, and improved customer satisfaction as top drivers.
- Early adopters are already posting eye-catching performance gains. Companies piloting agentic AI report a 60 percent reduction in time-to-recover from disruptions, a 22 percent drop in inventory without service loss, and 26 percent shorter product-development cycles — benefits that cascade across the P&L.
- A modular, agent-centric architecture turns vision into reality. Decision Intelligence Networks supply the trusted data fabric and governance layer, while small, composable teams of agents tackle discrete decision spaces (such as safety-stock setting or supplier reallocation) and learn from each other in real time, ensuring low risk and rapid time-to-value.
- A staged, low-risk roadmap accelerates the ascent to full autonomy. Organizations typically progress from decision support to supervised automation and finally to full autonomy, where the agent senses risk, decides, and acts within policy limits — delivering incremental ROI and building the trust needed for each successive leap (a minimal sketch of this policy gating follows the list).
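To ground that progression, here is a minimal sketch of how an agent’s actions might be gated by autonomy level. The level names, gate functions, and policy values are illustrative assumptions, not Aera’s implementation:

```python
from enum import Enum

class AutonomyLevel(Enum):
    DECISION_SUPPORT = 1        # agent recommends; a human decides and acts
    SUPERVISED_AUTOMATION = 2   # agent prepares the action; a human approves it
    FULL_AUTONOMY = 3           # agent senses, decides, and acts within policy limits

def execute(recommendation: dict, level: AutonomyLevel,
            within_policy, human_approves) -> str:
    """Gate an agent's recommendation according to the current autonomy level."""
    if level is AutonomyLevel.DECISION_SUPPORT:
        return "surface to planner"
    if level is AutonomyLevel.SUPERVISED_AUTOMATION:
        return "act" if human_approves(recommendation) else "hold for review"
    # Full autonomy: act automatically, but only inside explicit policy limits.
    return "act" if within_policy(recommendation) else "escalate to human"

# Example: a reorder that exceeds a hypothetical policy cap is escalated, not executed.
rec = {"action": "reorder", "quantity": 12_000}
print(execute(rec, AutonomyLevel.FULL_AUTONOMY,
              within_policy=lambda r: r["quantity"] <= 10_000,
              human_approves=lambda r: True))   # -> "escalate to human"
```

The point of the gate is that each stage reuses the previous one’s machinery: moving up a level changes who (or what) approves the action, not how the recommendation is produced.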
Speakers
Diego Pantoja-Navajas, Managing Director, Enterprise AI Value Strategy, Accenture
Diego leads Accenture’s global strategy for bringing agentic AI and decision intelligence into enterprise supply chains. Prior to Accenture, he founded cloud-native WMS pioneer LogFire (acquired by Oracle) and served as VP of AWS Supply Chain, where he helped shape SaaS execution and insight-driven operations.
Gonzalo Benedit, Chief Revenue Officer, Aera Technology
With two decades of experience guiding enterprise-software transformations, Gonzalo leads Aera’s sales organization. Prior to Aera, he served as President of Workday International (EMEA & Asia) and held senior SAP posts, including COO for SAP EMEA and Managing Director for SAP Mexico.
Mustafa Kabul, SVP, Data Science & AI, Aera Technology
Mustafa drives the data-science and AI strategy behind Aera, blending optimization, machine learning, and generative AI to power autonomous decision-making. He earned a PhD in Operations Research from UNC-Chapel Hill, where his research focused on game-theoretic supply-chain models.
Full Recording
Q&A
Q: How do we decide when to use LLM agents or ML automation — is there a simple framework?
Mustafa Kabul: Not a very simple framework. One danger in addressing these complex decision situations is to oversimplify the problems. The short answer is: it depends. It depends on the use case, the complexity of the use case, the complexity of the data, and how we want to approach the problem. Is this a real-time scenario, or more of a decoupled, distributed batch scenario?

A couple of principles, though. Large-language-model agents are great at providing reasoning capabilities through a generative approach, and we can use them as general problem-solvers. When we equip them with advanced decision engines and let them figure out at runtime which engine to use, depending on the data and the character of the agent’s prompt, we can address a wide variety of complex scenarios. Traditional machine-learning models, on the other hand, are trained for one specific task — for example, a model that categorizes demand risk into three categories. You first have to acquire and build the training data for that task, train the model on those categories, and then deploy it for runtime use. It isn’t generalizable; as your data changes, you have to retrain it.

The power of large language models is that they are very expressive and their parameter space is very large. They can be adapted to very different tasks simply by providing examples at inference time — a form of in-context learning — with no retraining required.
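To make the contrast concrete, here is a minimal sketch of the demand-risk example Mustafa mentions, with synthetic data: a task-specific model that must be trained before use, versus a general LLM steered by a few examples at inference time. Everything below is illustrative; `llm_complete` is a placeholder for whatever completion endpoint you use, not a real API.

```python
from sklearn.ensemble import RandomForestClassifier

# --- Traditional ML: a model trained for one specific task ---
# Synthetic features: [forecast_error_pct, days_of_cover, promo_flag]
X_train = [[0.05, 45, 0], [0.30, 10, 1], [0.15, 25, 0], [0.40, 5, 1]]
y_train = ["low", "high", "medium", "high"]
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.predict([[0.25, 12, 1]]))  # new categories or new data => retrain

# --- LLM agent: adapt a general model with in-context examples ---
few_shot_prompt = """Categorize demand risk as low, medium, or high.
forecast_error=5%, days_of_cover=45, promo=no -> low
forecast_error=30%, days_of_cover=10, promo=yes -> high
forecast_error=15%, days_of_cover=25, promo=no -> medium
forecast_error=25%, days_of_cover=12, promo=yes ->"""
# No retraining: changing the task means changing the examples, not the model.
# answer = llm_complete(few_shot_prompt)  # placeholder for any LLM endpoint
```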
Diego Pantoja-Navajas: And, Mustafa, if I can help: it’s more of an “and” than an “or.” Many processes still need statistical ML models. AI agents advance them further because NLP captures business nuances and relationships across the supply chain. In some cases, the ML models are helping us train the LLMs on the right tasks and goals so they continue to progress as they learn more about the business.
Mustafa Kabul: The good thing is, you have all that in Aera, the decision intelligence agent.
Diego Pantoja-Navajas: I love it — and with Aera we’ll move more companies to the right of the chart, growing the percentage that go from POC to fully autonomous supply chains. Congratulations — love it.
Q: Do we need any foundational pieces — like a data lake or planning system — working well before we leverage agentic AI?
Diego Pantoja-Navajas: That’s a great question. A whole-supply-chain data foundation is, from my point of view, number one. Of course, you need the right direction and company alignment, but investing in a modern data foundation is critical. Without curated data (one single version of truth) and a fully connected data structure that brings together data from structured and unstructured sources, you can’t succeed. On top of the data foundation, you also need a semantic layer that translates tribal knowledge into a common vocabulary, and above that a knowledge layer that fuels our agents so they function optimally. Without that architecture, don’t waste time trying to implement an agentic solution; you’ll fail. I call it the data layer: sources, data products, semantic layer, knowledge layer. Make it number one on your to-do list as you move toward an autonomous supply chain.
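As a rough illustration of what a semantic layer does (the source systems, field names, and mappings below are hypothetical, not Aera’s schema): each system’s tribal vocabulary is resolved to one canonical term before agents consume the data.

```python
# Hypothetical semantic layer: map each source system's vocabulary to one
# canonical business term, so agents see a single version of truth.
SEMANTIC_LAYER = {
    "erp": {"MATNR": "product_id", "LGORT": "location_id", "LABST": "on_hand_qty"},
    "wms": {"sku": "product_id", "site": "location_id", "qty_available": "on_hand_qty"},
}

def to_canonical(source: str, record: dict) -> dict:
    """Translate a raw record from one source into the shared vocabulary."""
    mapping = SEMANTIC_LAYER[source]
    return {mapping.get(field, field): value for field, value in record.items()}

# Two systems, two vocabularies, one canonical view:
print(to_canonical("erp", {"MATNR": "A-100", "LABST": 320}))
print(to_canonical("wms", {"sku": "A-100", "qty_available": 315}))
```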
Q: We’ve heard a lot about agents hallucinating. How can this be prevented in such complex reasoning?
Diego Pantoja-Navajas: Don’t try to boil the ocean. Build a north-star vision but hit minor milestones as you agentize workflows. Understand the data, reduce hallucinations, fail fast, go back, adjust, and build trust. Start small, have a solid data foundation, and be sure the agents have the right sources to execute successfully.
Mustafa Kabul: That’s fundamental. Implementing these powerful capabilities may require a slower progression so you understand every aspect. LLMs bring additional uncertainty, so your data foundation and deterministic agent functions must create a sandbox with validations and verifications around them. Work with a platform that sets the right data foundation and the right playground so LLM agents can maximize their value.
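One generic way to build such a sandbox (a sketch under assumed policy values, not Aera’s mechanism) is to treat every LLM response as untrusted until it passes deterministic schema and policy checks:

```python
import json

# Hypothetical policy limits; a hallucinated action or quantity fails the gate.
POLICY = {"max_order_qty": 10_000, "allowed_actions": {"reorder", "expedite", "hold"}}

def validate_agent_output(raw: str) -> dict:
    """Deterministic checks around a (possibly hallucinated) LLM response."""
    proposal = json.loads(raw)  # malformed output fails fast here
    if proposal["action"] not in POLICY["allowed_actions"]:
        raise ValueError(f"unknown action: {proposal['action']}")
    if not 0 < proposal["quantity"] <= POLICY["max_order_qty"]:
        raise ValueError(f"quantity outside policy: {proposal['quantity']}")
    return proposal  # only now is it safe to hand downstream

print(validate_agent_output('{"action": "reorder", "quantity": 500}'))
```

The design point: the LLM proposes, but only deterministic code disposes, so a hallucination is contained as a rejected proposal rather than an executed action.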
Q: Don’t agents take time to execute? How will this scale?
Mustafa Kabul: The compute-intensive jobs don’t change. If you’re solving a large mixed-integer linear program in your agentic workflow, the core math still runs at the same speed. The same goes for large-scale forecasting across millions of grains with deep-learning models. LLMs do add latency when they interpret and reason over those outputs at inference time, but inference speeds are improving dramatically. And the flexibility and time efficiency gained (manual processes that took weeks become automated) cover the added latency very quickly. Yes, there’s some additional latency and maybe cost, but the benefits significantly outweigh them.
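A back-of-envelope comparison makes the point; the numbers below are illustrative assumptions, not figures from the webinar.

```python
# Illustrative only: compare added LLM latency to the manual cycle it replaces.
solver_seconds = 15 * 60               # MILP solve: unchanged with or without agents
llm_seconds = 30                       # added LLM interpretation per decision
manual_cycle_seconds = 5 * 24 * 3600   # a five-day manual review-and-handoff cycle

agentic_total = solver_seconds + llm_seconds
print(f"agentic cycle: {agentic_total / 3600:.2f} hours")
print(f"manual cycle:  {manual_cycle_seconds / 3600:.0f} hours")
print(f"LLM overhead:  {llm_seconds / agentic_total:.1%} of the agentic cycle")
```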
Diego Pantoja-Navajas: And the explainability the agents provide — the what and the why — adds huge value. Scaling with the right information and explainability is key, regardless of company size.