Tips for building an infrastructure that ensures agility from the edge to the core of the network.
AI has advanced at a pace that many infrastructure professionals did not see coming. IDC predicts that spending on AI-related infrastructure will reach $758 billion by 2029, a shift impacting far more than data centers. AI-related data flows will represent an increasing share of network traffic, not only east-west, between servers within the same data center or cloud environment, but in all directions.
Across industries, from manufacturing to financial services to retail, many companies are discovering promising use cases that involve deploying AI workloads at the edge. As they move from testing to production, these local workloads can be transferred to cloud GPU clusters for training and inference. The result is unpredictable north-south data flows, between the internal network and external networks such as the Internet, across the company’s wide area network (WAN).
Traditional networks were designed with the assumption that connectivity would be predictable, both in terms of destination and volume. The new dynamics of AI can overwhelm existing enterprise architectures, making it more difficult to maintain efficient, budget-friendly deployments.
Many enterprise networks were not built for this and are showing signs of saturation. If the network layer is not ready for AI, any strategy built on this technology will fail. Today’s network infrastructure must be high-performance, resilient and flexible enough to adapt to changing demands.
From capacity indicators to performance indicators
Traditional network planning focused on capacity: how much bandwidth, how many connections, what throughput levels. AI workloads require a different framework. Now the questions are about performance: Can the network handle traffic from edge to core at scale? Can it support massive east-west traffic patterns? Can it accommodate workloads that were not envisioned when the architecture was initially designed? Can it authenticate every user, along with an exponentially growing number of devices and agents, under zero-trust security practices?
Executing AI workloads depends not only on the computing power of GPUs or CPUs, but also on the ability to move data and adapt to market changes. Network performance is therefore a critical factor in overall AI effectiveness: the network is an enabler of AI, far more than a passive transport layer.
Why relying on a single vendor is a bad choice
When infrastructure needs evolve so quickly, flexibility becomes the most valuable competitive advantage. Proprietary ecosystems may deliver great performance today, but they also limit interoperability, slow integration, and reduce an organization’s ability to adapt to changing AI workloads. Vendor-agnostic architectures provide the resilience and flexibility needed, both now and in the future.
The strategic question for businesses is no longer simply which vendor offers the best value, but which architecture provides the flexibility to connect, evolve and innovate as AI needs continue to change.
The repercussions on the company’s IT strategy
The organizations that will find success in the AI era are those that approach the network as an essential, flexible platform from edge to core, not as a fixed element. By favoring vendor-neutral design, they can integrate new technologies without large-scale replacement. They will also evaluate network performance based on resiliency, visibility and control, rather than bandwidth alone.
Ultimately, they will understand that agility is the real asset. AI workloads will continue to evolve. Model architectures will change. Training methods will progress. Inference requirements will shift. Security demands will grow. The network deployed today must be capable of supporting workloads that have not yet been imagined.
This requires a fundamental shift in the way business networks are conceptualized. Not as static pipes optimized for predictable traffic, but as adaptive structures capable of responding to demands we have not yet fully imagined.
Today’s infrastructure choices will determine whether AI becomes a strategic advantage or an operational constraint. It therefore makes sense to opt for architectures that leave enough room to evolve, adapt, integrate security by design and innovate at the pace AI demands.