product
Product ID: 1623160
SKU: 27323036
Title: Distributed Machine Learning and Gradient Optimization
Publisher: Springer Nature Singapore
Format: Ebook
ISBN: 9789811634208
Category: Ebooks
Currency: MXN
Availability: InStock
URL: https://www.gandhi.com.mx/distributed-machine-learning-and-gradient-optimization-1/p
Image: https://gandhi.vtexassets.com/arquivos/ids/336244/292664f7-a49f-4f2e-8289-b8586319d886.jpg
Description: This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems, so implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic optimizations and careful system implementations, the book introduces three essential techniques for designing a gradient optimization algorithm that trains a distributed machine learning model: parallel strategy, data compression, and synchronization protocol.
Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems of distributed machine learning.
It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
Author: Ce Zhang
Language: English
Country: México
Publication date: 2022-02-23
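The description names the book's three design axes for distributed gradient optimization (parallel strategy, data compression, synchronization protocol) without illustrating them. The following minimal Python sketch is not taken from the book; it shows one common combination under assumed settings (four simulated workers, a least-squares objective, a toy top-k sparsifier): each worker computes a gradient on its own data shard, compresses it, and a bulk-synchronous averaging step updates the shared model.

# Illustrative sketch (not from the book): synchronous data-parallel SGD
# with gradient averaging and a toy top-k gradient sparsification step.
# Worker count, learning rate, and the least-squares objective are assumptions
# chosen for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_samples, dim = 4, 400, 10

# Synthetic regression data, partitioned evenly across workers (data parallelism).
X = rng.normal(size=(n_samples, dim))
true_w = rng.normal(size=dim)
y = X @ true_w + 0.01 * rng.normal(size=n_samples)
shards = np.array_split(np.arange(n_samples), n_workers)

def local_gradient(w, idx):
    """Gradient of the mean-squared error on one worker's data shard."""
    Xi, yi = X[idx], y[idx]
    return 2.0 * Xi.T @ (Xi @ w - yi) / len(idx)

def top_k(g, k):
    """Keep only the k largest-magnitude entries (a toy compression scheme)."""
    out = np.zeros_like(g)
    keep = np.argsort(np.abs(g))[-k:]
    out[keep] = g[keep]
    return out

w = np.zeros(dim)
lr = 0.1
for step in range(100):
    # Bulk-synchronous round: every worker reports a (compressed) local gradient,
    # the server averages them and applies one update to the shared model.
    grads = [top_k(local_gradient(w, idx), k=5) for idx in shards]
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - true_w))

The bulk-synchronous averaging step here is only one possible synchronization protocol; asynchronous or stale-synchronous updates, and compression schemes with error feedback, are the kinds of alternatives a treatment of these three techniques would typically compare.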