Towards Resource-Elastic Machine Learning

Shravan Narayanamurthy, Markus Weimer, Dhruv Mahajan, Tyson Condie, Sundararajan Sellamanickam, S. Sathiya Keerthi

Abstract

In this article, we argue that resource elasticity is a key requirement for distributed machine learning. Not only do computational resources disappear without warning (e.g., due to machine failure), but modern resource managers also re-negotiate the available resources while a job is running: additional machines may become available, or already reserved ones may be re-assigned to other jobs. We show how to formalize this problem and present an initial approach for linear learners.
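To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of what resource elasticity can mean for a linear learner: a driver re-partitions the training data whenever the resource manager changes the number of available workers, and each epoch averages the per-worker partial gradients. The function names and the worker-count schedule are illustrative assumptions.

```python
# Hypothetical sketch of elastic data re-partitioning for a linear learner.
# Workers are simulated; in a real system each block would live on a machine
# granted (or revoked) by the resource manager.
import numpy as np

def partition(n_rows, n_workers):
    # Split row indices into n_workers near-equal contiguous blocks.
    return np.array_split(np.arange(n_rows), n_workers)

def elastic_gd(X, y, worker_counts, lr=0.1):
    # One epoch per entry in worker_counts; each entry is the number of
    # workers the (simulated) resource manager grants for that epoch.
    w = np.zeros(X.shape[1])
    for n_workers in worker_counts:
        # Elasticity event: re-balance the data across the new worker set.
        blocks = partition(len(y), n_workers)
        # Each "worker" computes a partial least-squares gradient on its
        # block; the driver averages them into one descent step.
        grads = [X[b].T @ (X[b] @ w - y[b]) / len(b) for b in blocks]
        w -= lr * np.mean(grads, axis=0)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
# Worker count varies across epochs: machines leave and join mid-job.
w = elastic_gd(X, y, worker_counts=[4, 2, 5] * 100)
```

Because each step averages gradients over whatever partitioning is currently in force, the model state survives changes in the worker set; only the data assignment moves.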


BibTeX

@misc{narayanamurthy2013,
  author       = {Shravan Narayanamurthy and Markus Weimer and Dhruv Mahajan and Tyson Condie and
                  Sundararajan Sellamanickam and S. Sathiya Keerthi},
  howpublished = {NIPS 2013 BigLearn Workshop},
  title        = {Towards Resource-Elastic Machine Learning},
  url          = {http://research.microsoft.com/apps/pubs/default.aspx?id=217296},
  year         = {2013},
}