Google announces the Cloud TPU v5p, its most powerful AI accelerator yet

Wed Dec 6, 3:17pm UTC
Anonymous $Pi6HN8Q0B-

https://techcrunch.com/2023/12/06/google-announces-the-cloud-tpu-v5p-its-most-powerful-ai-accelerator-yet/

Google today announced the launch of its new Gemini large language model (LLM) and, with that, the company also launched its new Cloud TPU v5p, an updated version of the Cloud TPU v5e, which reached general availability earlier this year. A v5p pod consists of a total of 8,960 chips and is backed by Google’s fastest interconnect yet, with up to 4,800 Gbps per chip. Google trained Gemini on these new custom chips.

It’s no surprise that Google promises these chips are significantly faster than the v4 TPUs; the team claims the v5p features a 2x improvement in FLOPS and a 3x improvement in high-bandwidth memory. That’s a bit like comparing the new Gemini model to the older OpenAI GPT-3.5 model, though, since Google itself already moved the state of the art beyond the TPU v4. In many ways, the v5e pods were a bit of a downgrade from the v4 pods, with only 256 v5e chips per pod vs. 4,096 in the v4 pods, and 197 TFLOPS of 16-bit floating-point performance per v5e chip vs. 275 for the v4 chips. For the new v5p, Google promises up to 459 TFLOPS of 16-bit floating-point performance per chip, backed by the faster interconnect.
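
The per-chip and per-pod figures are easier to compare in aggregate. Here is a minimal back-of-envelope sketch in Python using only the numbers quoted above; these are theoretical peaks and ignore interconnect, memory bandwidth, and real-world utilization:

```python
# Rough aggregate peak 16-bit compute per pod, from the per-chip TFLOPS
# and chips-per-pod figures cited in the article. Theoretical peaks only.

PODS = {
    # name: (chips per pod, peak 16-bit TFLOPS per chip)
    "TPU v4":  (4096, 275),
    "TPU v5e": (256,  197),
    "TPU v5p": (8960, 459),
}

for name, (chips, tflops_per_chip) in PODS.items():
    pod_pflops = chips * tflops_per_chip / 1000  # TFLOPS -> PFLOPS
    print(f"{name}: {chips} chips x {tflops_per_chip} TFLOPS "
          f"= about {pod_pflops:,.0f} PFLOPS per pod")
```

At pod scale, that works out to roughly 1.1 exaFLOPS for a v4 pod, about 50 petaFLOPS for a v5e pod, and over 4 exaFLOPS for a v5p pod, which is why the v5e looked like a step down at pod level and the v5p looks like a substantial step up.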
