
MAD 2012 : 2012 Workshop on Managing Systems Automatically and Dynamically


When Oct 7, 2012 - Oct 7, 2012
Where Hollywood, CA
Submission Deadline Jul 20, 2012
Notification Due Aug 10, 2012
Final Version Due Sep 12, 2012
Categories    USENIX   operating system design   networking   system management

Call For Papers

*** DEADLINE EXTENDED TO July 20, 2012, 11:59 p.m. PDT ***

MAD '12 will be co-located with the 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI '12), which will take place October 8–10, 2012.

Important Dates
Paper submissions due: July 20, 2012, 11:59 p.m. PDT (extended)
Notification to authors: August 10, 2012
Final papers due: September 12, 2012

Conference Organizers:
Program Co-Chairs
Peter Bodik, Microsoft Research
Greg Bronevetsky, Lawrence Livermore National Laboratory

The complexity of modern systems makes them extremely challenging to manage. From highly heterogeneous desktop environments to large-scale systems consisting of many thousands of software and hardware components, these systems exhibit a wide range of complex behaviors that are difficult to predict. As a result, although the raw computational capability of these systems grows each year, much of it is lost to (i) complex failures that are difficult to localize and (ii) poor performance and efficiency resulting from system configurations that are inappropriate for the user's workload. The MAD workshop focuses on techniques to make complex systems manageable, addressing the problem's three major aspects:

System Monitoring
Systems report their state and behavior using a wide range of mechanisms. System and application logs record key events that occur within software or hardware components. Performance counters measure various OS- and hardware-level metrics (e.g., packets sent or cache misses) within a given time period. Further, information from source code version control systems or request traces can help identify the source of failures or poor performance.

Data Analysis
Data produced by monitoring can be analyzed using a variety of techniques to understand the system's state and predict its behavior in various possible scenarios. Traditionally, this consisted of system administrators manually inspecting system logs or using explicit pattern-matching rules to identify key events. Recent research has also focused on statistical and machine learning techniques that automatically identify behavioral patterns. Finally, the data can be presented directly to system administrators; because of the data's large volume, such displays rely on aggregation techniques that convey the maximum information in minimal space.
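As a minimal illustrative sketch of the explicit pattern-matching rules mentioned above (the log format, rule set, and event labels here are hypothetical, not drawn from any particular system):

```python
import re

# Hypothetical rules mapping a compiled regex to an event label.
RULES = [
    (re.compile(r"ERROR.*disk", re.IGNORECASE), "disk_failure"),
    (re.compile(r"WARN.*latency \d+ms"), "high_latency"),
]

def match_events(log_lines):
    """Return (line, label) pairs for lines matching any rule.

    The first matching rule wins, mirroring how administrators
    often apply an ordered list of alert patterns.
    """
    events = []
    for line in log_lines:
        for pattern, label in RULES:
            if pattern.search(line):
                events.append((line, label))
                break
    return events

log = [
    "2012-07-01 ERROR disk /dev/sda unreadable",
    "2012-07-01 INFO request served",
    "2012-07-01 WARN latency 450ms on shard 3",
]
print(match_events(log))
```

In practice such rule sets are hand-maintained and brittle, which is precisely the gap the statistical and machine learning approaches above aim to close.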

Informed Action
Operators use these analyses and visualizations to select the best action to improve productivity or to localize and resolve system failures. Possible actions include restarting processes, rebooting servers, rolling back application updates, or reconfiguring system components. Because choosing the best action is complex, operators need assistance from additional analysis tools that predict the productivity of any given configuration on a given workload.

MAD seeks original early work on system management, including position papers and work-in-progress reports that are expected to mature into publications at high-quality conferences. Papers should demonstrate a strong grounding in the needs of the system management community and be positioned within the broader context of related work. In addition to technical merit, papers will be selected to encourage discussion at the workshop and among members of the broader system management community.

Topics include but are not limited to:

Techniques to collect metric and log data, including tracing and statistical measurements
Large-scale aggregation of metric and log data
Reports on publicly available sources of sample logs or system metrics

Automated analysis of system logs and metrics using statistical, machine learning, or natural language processing techniques
Visualization of system information in a way that leads administrators to actionable insights
Evaluation of the quality of learned models, including assessing the confidence/reliability of models and comparisons between different methods

Applications of log and metric analysis to address reliability, performance, power management, security, fault diagnosis, scheduling, or manageability
Challenges of scale in applying machine learning to large systems
Integration of machine learning into real-world systems and processes

Submission Guidelines
Submitted papers must be no longer than six (6) 8.5"x11" or A4 pages, using a 10-point font on 12 point (single-spaced) leading, with a maximum text block of 6.5 inches wide by 9 inches deep. The page limit includes everything except for references, for which there is no limit. The use of color is acceptable, but the paper should be easily readable if viewed or printed in gray scale. Authors must make a good faith effort to anonymize their submissions, and they should not identify themselves either explicitly or by implication (e.g., through the references or acknowledgments). Submissions violating the detailed formatting and anonymization rules on the Web site will not be considered for publication.

Authors who are not sure about anonymization or whether their paper fits into MAD should contact the MAD chairs. There will be no extensions for reformatting. Papers will be held in full confidence during the reviewing process, but papers accompanied by nondisclosure agreement forms are not acceptable and will be rejected without review. Authors of accepted papers will be expected to supply electronic versions of their papers and encouraged to supply source code and raw data to help others replicate and better understand their results.

Papers must be in PDF and must be submitted via the Web submission form, which will be available here soon.

Authors will be notified of acceptance by August 10, 2012. Authors of accepted papers must produce a final PDF by September 12, 2012. All papers will be available online to registered attendees before the workshop. If your accepted paper should not be published prior to the event, please notify the workshop organizers. The papers will be available online to everyone beginning on the day of the workshop.

Simultaneous submission of the same work to multiple venues, submission of previously published work, or plagiarism constitutes dishonesty or fraud. USENIX, like other scientific and technical conferences and journals, prohibits these practices and may take action against authors who have committed them. See the USENIX Conference Submissions Policy for details. Questions? Contact the program co-chairs or the USENIX office.
