TensorFlow 2015 White Paper


TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
(Preliminary White Paper, November 9, 2015)

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng

Google Research

Corresponding authors: Jeffrey Dean and Rajat Monga: jeff,

Abstract

TensorFlow [1] is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.

1 Introduction

The Google Brain project started in 2011 to explore the use of very-large-scale deep neural networks, both for research and for use in Google's products. As part of the early work in this project, we built DistBelief, our first-generation scalable distributed training and inference system [14], and this system has served us well. We and others at Google have performed a wide variety of research using DistBelief including work on unsupervised learning [31], language representation [35, 52], models for image classification and object detection [16, 48], video classification [27], speech recognition [56, 21, 20], sequence prediction [47], move selection for Go [34], pedestrian detection [2], reinforcement learning [38], and other areas [17, 5]. In addition, often in close collaboration with the Google Brain team, more than 50 teams at Google and other Alphabet companies have deployed deep neural networks using DistBelief in a wide variety of products, including Google Search [11], our advertising products, our speech recognition systems [50, 6, 46], Google Photos [43], Google Maps and StreetView [19], Google Translate [18], YouTube, and many others.

Based on our experience with DistBelief and a more complete understanding of the desirable system properties and requirements for training and using neural networks, we have built TensorFlow, our second-generation system for the implementation and deployment of large-scale machine learning models. TensorFlow takes computations described using a dataflow-like model and maps them onto a wide variety of different hardware platforms, ranging from running inference on mobile device platforms such as Android and iOS to modest-sized training and inference systems using single machines containing one or many GPU cards to large-scale training systems running on hundreds of specialized machines with thousands of GPUs. Having a single system that can span such a broad range of platforms significantly simplifies the real-world use of machine learning systems, as we have found that having separate systems for large-scale training and small-scale deployment leads to significant maintenance burdens and leaky abstractions. TensorFlow computations are expressed as stateful dataflow graphs (described in more detail in Section 2), and we have focused on making the system both flexible enough for quickly experimenting with new models for research purposes and sufficiently high performance and robust for production training and deployment of machine learning models. For scaling neural network training to larger deployments, TensorFlow allows clients to easily express various kinds of parallelism through replication and parallel execution of a core model dataflow graph, with many different computational devices all collaborating to update a set of shared parameters or other state. Modest changes in the descripti
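The stateful dataflow-graph model described above can be illustrated with a minimal sketch in plain Python. This is a conceptual toy, not the TensorFlow API: the `Node` and `Variable` classes and all names here are invented for illustration. The point is the execution model the paper describes — operations are nodes, edges carry values, and stateful nodes hold mutable parameters that persist across graph executions, as they would when many devices collaborate to update shared state.

```python
# Conceptual sketch (not the TensorFlow API): a minimal stateful
# dataflow graph. Nodes are operations; evaluating a node pulls
# values from its upstream nodes; Variable nodes hold mutable state
# that persists between runs of the same graph.

class Node:
    """An operation in the dataflow graph."""
    def __init__(self, op, *inputs):
        self.op = op          # callable computing this node's output
        self.inputs = inputs  # upstream nodes whose outputs it consumes

    def run(self):
        # Recursively evaluate upstream nodes, then apply this op.
        return self.op(*(n.run() for n in self.inputs))

class Variable(Node):
    """A stateful node: its value survives across graph executions."""
    def __init__(self, value):
        super().__init__(lambda: self.value)
        self.value = value

    def assign(self, new_value):
        self.value = new_value

# Build the graph once: y = (w * x)^2, where w is shared state.
w = Variable(3.0)
x = Node(lambda: 2.0)                # a constant input
mul = Node(lambda a, b: a * b, w, x)
y = Node(lambda v: v * v, mul)

print(y.run())   # (3.0 * 2.0)^2 = 36.0
w.assign(1.0)    # mutate shared state, as a training update would
print(y.run())   # (1.0 * 2.0)^2 = 4.0
```

Because the graph is a data structure separate from its execution, the same description can in principle be partitioned across devices or replicated — which is how the paper frames parallelism: many workers executing copies of one core dataflow graph against shared parameters.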
