CALL FOR PAPERS
Program Transformations for Machine Learning
Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS)
December 13 or 14, 2019
Vancouver Convention Centre, Vancouver, BC, Canada
Machine learning researchers often express complex models as programs, relying on program transformations to add functionality. New languages and transformations (e.g., TorchScript and TensorFlow AutoGraph) are becoming core capabilities of ML libraries. However, existing transformations, such as automatic differentiation (AD, or autodiff), inference in probabilistic programming languages (PPLs), and optimizing compilers, are often built in isolation and limited in scope. This workshop aims to view program transformations for ML in a unified light, to make these capabilities more accessible, and to build entirely new ones.
Program transformations are an area of active study. AD transforms a program performing numerical computations into one that computes the gradients of those computations. In probabilistic programming, a program describing a sampling procedure can be modified to instead perform inference on model parameters given observations. Other examples include vectorizing a program expressed on a single data point, and learned transformations, where ML models take programs as inputs or produce them as outputs.
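As a concrete illustration of the AD case above, here is a minimal sketch of forward-mode AD via operator overloading, one common way to realize such a transformation. The class `Dual` and helper `derivative` are illustrative names, not from any particular library:

```python
class Dual:
    """A value paired with its derivative (a 'dual number')."""

    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate df/dx by running f on a dual number seeded with dx = 1."""
    return f(Dual(x, 1.0)).deriv


# The same Python function now yields both a value and a derivative:
f = lambda x: 3 * x * x + 2 * x + 1  # f'(x) = 6x + 2
print(derivative(f, 4.0))            # 26.0
```

The key point is that the numerical program itself is unchanged; overloading the arithmetic operators transforms its execution into one that also propagates derivatives, which is the kind of capability the workshop seeks to unify with inference in PPLs, vectorization, and compiler passes.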
This workshop will bring together researchers from the fields of AD, probabilistic programming, programming languages, compilers, and ML, with the goal of understanding the commonalities between their disparate approaches and sharing ways to make these techniques broadly available. Doing so would enable ML practitioners to iterate faster on novel models and architectures (e.g., those naturally expressed through high-level constructs like recursion). Topics of discussion include:
* Abstractions and syntax (beyond meta-programming and operator overloading) to naturally express a program (expression, or procedure) as an object to be manipulated
* Techniques from AD and PPLs that the ML community could adopt to enable research on new models
* How to overcome the challenges posed by ML's specific hardware (GPUs, specialized chips) and software (Python) stacks, and by practitioners' particular demands of their tools
* Greater collaboration between ML and programming languages communities
We are soliciting contributions that bridge the gap between the AD, (P)PL, ML, and/or compiler/systems communities. Submissions should be extended abstracts of 2 to 4 pages. They do not need to be anonymized, and they are non-archival. Work can include:
* recent work on these topics which was published in non-ML venues;
* preliminary or novel work demonstrating applications of program transformation techniques to ML (but not finalized work already published);
* a summary of multiple previous contributions on program transformation techniques with potential applications for ML.
Please submit your abstracts at openreview.net/group?id=NeurIPS.cc/2019/Workshop/Program_Transformations
Up to 6 submissions will be selected for contributed talks. Talks will be selected based on the quality of the submission, and with the aim of spanning the different research disciplines that this workshop aims to engage. Remaining submissions will be considered for a poster session. Submissions will be reviewed by at least two, and ideally three, members of the organizing committee, and will not be reviewed by anyone with a conflict of interest (i.e., a shared affiliation within the past 3 years).
Important dates:
* Monday 12 August, 2019: Submissions open
* Monday 16 September, 2019: Submissions deadline
* Wednesday 18 September, 2019: Reviewing period starts
* Wednesday 25 September, 2019: Reviews due
* Friday 27 September, 2019: Decision deadline
* Tuesday 1 October, 2019: NeurIPS notification deadline
Organizers:
* Jan-Willem van de Meent (Northeastern University)
* Soumith Chintala (Facebook AI Research)
* Christine Tasson (Université de Paris)
* Skye Wanderman-Milne (Google Brain)
* Alex Wiltschko (Google Brain)
* Atılım Güneş Baydin (University of Oxford)
* Bart van Merriënboer (Google Brain)
* Pascal Lamblin (Google Brain)
* Emily Fertig (Google Research)
* Barak Pearlmutter (Maynooth University)
* Laurent Hascoët (INRIA)
* David Duvenaud (University of Toronto)