The Google Cloud TPU delivers up to 180 teraflops of computing power.

At the Google I/O conference Wednesday, Google unveiled the next generation of its custom-built Tensor Processing Unit (TPU), a chip designed for machine learning. The first generation of TPUs, revealed at last year’s I/O, was designed to run already-trained machine learning models. The new chip, delivering an impressive 180 teraflops of computing power, both trains and runs such models.

Called the Cloud TPU, the second-generation chip will be available to anyone through the Google Cloud Platform (GCP). Developers on GCP are still free to design with traditional CPUs like Intel’s Skylake or GPUs like Nvidia’s Volta.

The Cloud TPU is the latest example of how Google is using its cutting-edge technology to set itself apart from its public cloud competitors.

“We want Google Cloud to be the best cloud for machine learning,” Google CEO Sundar Pichai said in the I/O keynote address. “This lays the foundation for significant progress.”

To ramp up computing power even further, Google has built a custom, ultra-high-speed network to connect 64 TPUs into a machine-learning supercomputer. Called a TPU pod, it can deliver up to 11.5 petaflops (or 11,500 teraflops) of compute to train either a single large machine learning model or many smaller ones.
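The pod-level figure follows directly from the per-chip numbers Google quoted: 64 second-generation TPUs at 180 teraflops each. A quick back-of-the-envelope check (the aggregation is a simple multiply; real-world throughput will depend on the workload):

```python
# Back-of-the-envelope check of Google's quoted TPU pod throughput.
tpu_teraflops = 180   # per second-generation Cloud TPU, as announced
tpus_per_pod = 64     # chips linked by Google's custom high-speed network

pod_teraflops = tpus_per_pod * tpu_teraflops
pod_petaflops = pod_teraflops / 1000

print(pod_teraflops)  # 11520 teraflops
print(pod_petaflops)  # 11.52 petaflops, rounded to 11.5 in the announcement
```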

To illustrate the power of a TPU pod, Google said that to train its new, large-scale language translation model, it would take a full day using 32 of the world’s best commercially available GPUs. By contrast, one-eighth of a TPU pod can train the model in just six hours.
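Taken at face value, those figures imply a wide gap in device-hours between the two setups. A rough comparison, assuming the "full day" on GPUs means 24 hours and one-eighth of a 64-chip pod means 8 TPUs:

```python
# Rough device-hours comparison from the figures Google quoted.
gpu_count, gpu_hours = 32, 24       # 32 top commercial GPUs, a full day (assumed 24 h)
tpu_count, tpu_hours = 64 // 8, 6   # one-eighth of a 64-TPU pod, six hours

gpu_device_hours = gpu_count * gpu_hours  # 768 GPU-hours
tpu_device_hours = tpu_count * tpu_hours  # 48 TPU-hours

print(gpu_device_hours / tpu_device_hours)  # 16.0 -> 16x fewer device-hours
```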

Both the individual Cloud TPUs and the full TPU pods were designed to be programmed using high-level abstractions expressed in TensorFlow, the open-source machine learning system developed by Google.

The first-generation TPUs were deployed internally at Google two years ago and are used in Google products such as Search, the new neural machine translation system for Google Translate, Google speech recognition, and Google Photos.

Jeff Dean, a senior fellow for Google Brain, told reporters this week that Google is still using CPUs and GPUs to train some machine learning models. However, he expects that over time, Google will increasingly use TPUs.

Meanwhile, Google on Wednesday also announced the TensorFlow Research Cloud, a cluster of 1,000 Cloud TPUs that Google will offer to researchers for free under certain conditions: researchers must be willing to openly publish the results of their work and potentially open source the associated code.

Google’s making the Research Cloud available to accelerate the pace of machine learning research and plans to share it with entities like Harvard Medical School.

For those involved in proprietary research, Google plans to launch the Cloud TPU Alpha program.
