Google has unveiled a new generation of artificial intelligence chips designed specifically for training and running AI models, marking its most direct challenge yet to Nvidia’s dominance in the sector, according to Capital Brief. The announcement, made at its 2026 Cloud Next event, introduces two separate processors tailored to different AI workloads.
The new chips, called TPU 8t and TPU 8i, represent a shift from a single-chip approach to specialized hardware. The TPU 8t is built for compute-heavy training of large AI models, while the TPU 8i is optimized for inference — the real-time operation of AI systems after they are deployed.
Performance improvements are a key selling point. Google says the training chip delivers significantly higher compute power, while the inference chip offers up to 80% better performance efficiency — a response to growing demand for faster, lower-cost AI services.
The move reflects a broader industry shift toward "agentic AI," in which systems don't just respond to prompts but carry out complex, multi-step tasks, TechCrunch noted. This evolution is driving demand for dedicated inference hardware, which analysts say could become as important as training infrastructure in the coming years.
While Nvidia still dominates the global AI chip market with its GPUs, Google’s strategy highlights how major tech firms are increasingly building in-house silicon to reduce reliance on external suppliers and gain an edge in cloud-based AI services.