Tensor Processing Units (TPUs) as scientific supercomputers

Monday, October 24, 2022 at 4:15 p.m.
Location: Physics North Lecture Hall #1
Speaker: Guifre Vidal, Research Scientist at Google Quantum AI

Abstract: Google's TPUs were designed exclusively to accelerate and scale up machine learning workloads, amid the ongoing planet-wide race to build faster specialized hardware for artificial intelligence. But surely one must be able to use this hardware for other challenging computational tasks, right? We explored how to turn a TPU pod (2048 TPU v3 cores) into a dense linear algebra supercomputer that can, for example, multiply two matrices of size 1,000,000 × 1,000,000 in just 2 minutes. We then used this power to perform a number of quantum physics and quantum chemistry computations at scale. For instance, we recently completed the two largest-ever computations of their kind: a Density Functional Theory (DFT) computation of electronic structure (with N = 248,000 orbitals) and a Density Matrix Renormalization Group (DMRG) computation (with bond dimension D = 65,000). Cloud-based TPU pods and GPU pods are accessible to anyone and are poised to revolutionize the scientific supercomputing landscape.
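
As a rough illustration of the programming model involved (a minimal sketch, not the speaker's actual implementation; the distributed algorithms used are described in arXiv:2112.09017), the snippet below shows how a single matrix multiplication can be sharded across all available TPU cores with JAX. The matrix size, mesh layout, and data type are hypothetical toy choices; the 1,000,000 × 1,000,000 case in the abstract requires a full TPU v3 pod and blocked distributed algorithms.

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange whatever devices are available (TPU cores, GPUs, or CPUs when
# testing locally) into a simple logical mesh with named axes "x" and "y".
n_dev = len(jax.devices())
devices = mesh_utils.create_device_mesh((n_dev, 1))
mesh = Mesh(devices, axis_names=("x", "y"))

# Shard A by rows along "x" and B by columns along "y", so each device
# holds only a block of each operand rather than a full copy.
sharding_a = NamedSharding(mesh, P("x", None))
sharding_b = NamedSharding(mesh, P(None, "y"))

n = 8192  # toy size; the talk's example is 1,000,000 x 1,000,000 on a pod
a = jax.device_put(jnp.ones((n, n), dtype=jnp.bfloat16), sharding_a)
b = jax.device_put(jnp.ones((n, n), dtype=jnp.bfloat16), sharding_b)

# Under jit, XLA's SPMD partitioner lowers this single jnp.dot into a
# distributed matrix multiply with the required cross-device communication.
@jax.jit
def matmul(x, y):
    return jnp.dot(x, y)

c = matmul(a, b)
print(c.shape, c.sharding)
```

Sharding the two operands along different mesh axes keeps the per-device memory footprint proportional to a block of each matrix, which is what makes matrices far larger than any single accelerator's memory tractable on a pod.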

References:

Simulation of quantum many-body dynamics with Tensor Processing Units: Floquet prethermalization, arXiv:2111.08044

Simulation of quantum physics with Tensor Processing Units, arXiv:2111.10466

Large Scale Distributed Linear Algebra With Tensor Processing Units, arXiv:2112.09017

Tensor Processing Units as Quantum Chemistry Supercomputers, arXiv:2202.01255

Dynamics of Transmon Ionization, arXiv:2203.11235

Density Matrix Renormalization Group with Tensor Processing Units, arXiv:2204.05693

Research Area: Condensed Matter