This past Tuesday, Google and Facebook announced a partnership to enable Facebook's open-source machine learning framework PyTorch to work with Google's Tensor Processing Units (TPUs). The partnership could signal a new age of collaboration in AI research.
“Today, we’re pleased to announce that engineers on Google’s TPU team are actively collaborating with core PyTorch developers to connect PyTorch to Cloud TPUs. The long-term goal is to enable everyone to enjoy the simplicity and flexibility of PyTorch while benefiting from the performance, scalability, and cost-efficiency of Cloud TPUs.” – Rajen Sheth, Director of Product
PyTorch is Facebook's open-source framework for building the mathematical programs used in artificial intelligence research. Frameworks like it let researchers construct arbitrarily complex computational graphs and have the derivatives of those computations calculated automatically, a technique known as automatic differentiation.
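To make the idea concrete, here is a minimal sketch of automatic differentiation using dual numbers (forward mode). This is an illustrative toy, not PyTorch's actual machinery: PyTorch records a graph of operations and applies reverse-mode autograd, but the core idea, derivatives computed alongside values rather than by hand, is the same.

```python
# Toy forward-mode automatic differentiation via dual numbers.
# Each Dual carries a value and the derivative of that value
# with respect to the input; arithmetic rules propagate both.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value   # f(x)
        self.deriv = deriv   # f'(x)

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        # sum rule: (f + g)' = f' + g'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        # product rule: (f * g)' = f' * g + f * g'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def grad(f, x):
    """Derivative of f at x, computed automatically."""
    # Seed the input with derivative 1 (dx/dx = 1).
    return f(Dual(x, 1.0)).deriv


# d/dx (3x^2 + 2x) at x = 2 is 6x + 2 = 14
print(grad(lambda x: 3 * x * x + 2 * x, 2.0))  # -> 14.0
```

A real framework generalizes this to tensors and thousands of operations, which is exactly the workload TPUs are designed to accelerate.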
TPUs are computer chips designed by Google specifically for AI workloads. According to Google, TPUs are 15x to 30x faster than conventional Graphics Processing Units (GPUs).
Why TPUs on PyTorch Matter
PyTorch 1.0, Facebook said, accelerates the workflow of taking breakthrough AI research all the way to production deployment.