
Cisco Systems unveiled its Silicon One G300 switching chip on February 10, 2026, at Cisco Live EMEA in Amsterdam, mounting a direct challenge to the networking dominance of Nvidia and Broadcom in the rapidly expanding AI infrastructure market. The 102.4-terabit-per-second chip targets AI's critical networking bottleneck as enterprises shift focus from rapid deployment to energy efficiency and total cost of ownership.
The G300 introduces what Cisco calls "Intelligent Collective Networking," combining an industry-leading 252-megabyte fully shared packet buffer, path-based load balancing, and proactive network telemetry. Cisco says these features deliver a 33 percent increase in network utilization and a 28 percent reduction in job completion time compared with non-optimized implementations, translating directly to more AI tokens generated per GPU-hour and improved data center profitability.
The chip powers new Cisco N9000 and Cisco 8000 switching systems designed for hyperscalers, neoclouds, sovereign clouds, service providers, and enterprises building AI clusters. The systems are available in both air-cooled and 100 percent liquid-cooled configurations, with liquid cooling enabling a nearly 70 percent energy efficiency improvement. A single liquid-cooled system delivers bandwidth that previously required six prior-generation systems, dramatically reducing physical footprint and operational costs.
Cisco President and Chief Product Officer Jeetu Patel emphasized the strategic importance of networking in AI infrastructure buildouts. "We are spearheading performance, manageability, and security in AI networking by innovating across the full stack—from silicon to systems and software," Patel stated. "We're building the foundation for the future of infrastructure, supporting every type of customer as they shift to AI-powered workloads."
The G300 addresses fundamental challenges as AI workloads evolve beyond predictable training tasks to unpredictable traffic patterns from agentic AI collaboration and on-demand inference. Networks must now handle the collision of synchronous high-bandwidth GPU communication with variable loads, while AI clusters scale to gigawatts of power spanning buildings and regions. These dynamics introduce latency and cost challenges that directly impact capital expenses and operating profit.
Martin Lund, Executive Vice President of Cisco's Common Hardware Group, highlighted that AI success depends on more than GPU performance alone. "It's not just about faster GPUs—the network must deliver scalable bandwidth and reliable, congestion-free data movement," Lund explained. "Silicon One G300 delivers high-performance, programmable, and deterministic networking, enabling every customer to fully utilize their compute and scale AI securely and reliably in production."
The chip's programmability enables equipment upgrades for new network functionality even after deployment, protecting long-term infrastructure investments as AI use cases evolve. Security fused directly into hardware allows holistic protection at network speeds, keeping clusters operational without performance degradation from security overhead.
Cisco stock rose 1.5 percent to $88.05 following the announcement, which came a day ahead of the company's Q2 FY2026 earnings call scheduled for February 11 at 4:30 PM EST. Analysts expect 8 percent year-over-year revenue growth as Cisco captures share in the projected $600 billion AI infrastructure market. The Silicon One G300 is expected to reach commercial availability in the second half of 2026.