
Distributed Machine Learning and Gradient Optimization

Jiawei Jiang Bin Cui Ce Zhang

Paperback

Price: SEK 2,249


Estimated delivery time: 10-16 working days

Free shipping for members on orders of at least SEK 249


  • 169 pages
  • 2023
This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems. As such, implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that can speed up large-scale gradient optimization through both algorithmic optimizations and careful system implementations, the book introduces three essential techniques for designing a gradient optimization algorithm to train a distributed machine learning model: parallel strategy, data compression, and synchronization protocol. Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
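As a rough illustration of the three techniques named in the blurb (parallel strategy, data compression, and synchronization protocol), the following is a minimal sketch, not taken from the book: synchronous data-parallel SGD with top-k gradient sparsification, simulated on a single machine with NumPy. All function names and parameters here are illustrative assumptions.

```python
# Minimal sketch (not from the book): synchronous data-parallel SGD
# with top-k gradient sparsification, simulating several workers
# on one machine. All names and parameters are illustrative assumptions.
import numpy as np

def top_k_sparsify(grad, k):
    """Data compression: keep only the k largest-magnitude gradient entries."""
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    sparse[idx] = grad[idx]
    return sparse

def local_gradient(w, X, y):
    """Least-squares gradient computed on one worker's data shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def train(X, y, num_workers=4, k=5, lr=0.1, steps=100):
    # Parallel strategy: data parallelism, i.e. split the data across workers.
    shards = list(zip(np.array_split(X, num_workers),
                      np.array_split(y, num_workers)))
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Each worker computes and compresses its local gradient.
        grads = [top_k_sparsify(local_gradient(w, Xs, ys), k)
                 for Xs, ys in shards]
        # Synchronization protocol: bulk-synchronous averaging of all gradients
        # before a single global model update.
        w -= lr * np.mean(grads, axis=0)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 10))
    true_w = rng.normal(size=10)
    y = X @ true_w + 0.01 * rng.normal(size=400)
    w = train(X, y)
    print("parameter error:", np.linalg.norm(w - true_w))
```

The book itself covers a much wider design space than this sketch, e.g. alternative parallelization schemes and asynchronous or stale-synchronous protocols in place of the bulk-synchronous averaging shown here.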
  • Authors: Jiawei Jiang, Bin Cui, Ce Zhang
  • Format: Paperback
  • ISBN: 9789811634222
  • Language: English
  • Number of pages: 169
  • Publication date: 2023-02-25
  • Publisher: Springer Verlag, Singapore