CSET 2012: 5th Workshop on Cyber Security Experimentation and Test
Call For Papers
CSET '12 will be co-located with the 21st USENIX Security Symposium (USENIX Security '12), which will take place August 8–10, 2012.
Submissions due: Thursday, April 19, 2012, 11:59 p.m. PDT
Notification to authors: Thursday, May 31, 2012
Final paper files due: Thursday, June 28, 2012
Program Co-Chairs
Sean Peisert, University of California, Davis, and Lawrence Berkeley National Laboratory
Stephen Schwab, USC Information Sciences Institute (ISI)
Program Committee
Matt Bishop, University of California, Davis
Jack Brassil, HP Labs
Elie Bursztein, Stanford University
Ron Dodge, U.S. Military Academy
Deborah Frincke, U.S. National Security Agency
Brian Hay, University of Alaska, Fairbanks
Cynthia Irvine, Naval Postgraduate School
Chris Kanich, University of California, San Diego
Christian Kreibich, International Computer Science Institute
Patrick Lardieri, Lockheed Martin ATL
Damon McCoy, George Mason University
Ron Ostrenga, Johns Hopkins Applied Physics Laboratory
Phil Porras, SRI International
Jessica Staddon, Google, Inc.
Ed Talbot, Sandia National Laboratories
Robert Watson, University of Cambridge Computer Laboratory
Sean Whalen, Columbia University
Steering Committee Chair
Terry V. Benzel, USC Information Sciences Institute (ISI)
CSET invites submissions on the science of cyber security evaluation, as well as experimentation, measurement, metrics, data, and simulations as those subjects relate to computer and network security.
The science of cyber security is challenging for a number of reasons. For example, very little data is available for research use, and little is understood about what good data would look like if it were obtained. Experiments must recreate relevant, realistic features—including human behavior—in order to be meaningful, yet identifying those features and modeling them is hard. Repeatability and measurement accuracy are essential in any scientific experiment yet hard to achieve in practice. And cyber security experiments carry significant risk if not properly contained and controlled, yet often require some degree of interaction with the larger world in order to be useful.
Meeting these challenges requires transformational advances, including understanding of the relationship between scientific method and cyber security evaluation, advancing capabilities of underlying experimental infrastructure, and improving data usability.
Topics of interest include but are not limited to:
Science of cyber security, e.g., experiences with and discussions of experimental methodologies
Measurement and metrics, e.g., what are useful or valid metrics? how do we know? how does measurement interact with (or interfere with) evaluation?
Data sets, e.g., what makes good data sets? how do we know? how do we compare data sets? how do we collect new ones or generate derived ones? how do they hold up over time? how well do red teaming or capture-the-flag exercises generate data sets?
Simulations and emulations, e.g., what makes good ones? how do they scale (up or down)?
Testbeds and experimental infrastructure, e.g., tools, usage techniques, support for experimentation in emerging security topics (cyber-physical systems, wireless, etc.)
Experiences with cyber security education, e.g., capture-the-flag exercises, novel experimentation techniques used in education, novel ways to teach hands-on cyber security
Because of the complex and open nature of the subject matter, CSET '12 is designed to be a workshop in the traditional sense: presentations are expected to be interactive, with a substantial portion of each time slot devoted to questions and audience discussion. Some papers will receive their own slot of about 45 minutes, while similarly themed papers may be grouped together for discussion. Papers and presentations should be conducive to discussion, and the audience is encouraged to participate. To ensure a productive workshop environment, attendance will be limited to 80 participants.
Position papers, research papers, and extended abstracts are welcome as submissions. For all submissions, the program committee will give greater weight to papers that lend themselves to interactive discussion among attendees.
Research papers should have a separate section labeled "Methodology" that clearly identifies the research hypothesis and the experiments designed to prove or disprove it. Submissions that recount experiences (e.g., from experiments or teaching) should have a section labeled "Lessons Learned" that discusses conclusions drawn from the experience and generalizes them to other environments.
Extended abstracts and position papers, particularly those that critique past work, should also include detailed proposed solutions.
Full position and research submissions must be 6–8 pages long including tables, figures, and references. Extended abstracts must be 2–4 pages long. Text should be formatted in two columns on 8.5" x 11" paper using 10 point type on 12 point leading ("single-spaced"), with the text block being no more than 6.5" wide by 9" deep. Text outside the 6.5" x 9" block will be ignored.
All submissions must be anonymized and will be reviewed blind by the program committee. Authors must make a good-faith effort to anonymize their submissions, and they should not identify themselves either explicitly or by implication (e.g., through the references or acknowledgments). Submissions violating the detailed formatting and anonymization rules will not be considered for the workshop.
Submissions must be in PDF and must be submitted via the Web submission form.
All papers will be available online to registered attendees before the workshop. If your accepted paper should not be published prior to the event, please notify firstname.lastname@example.org. The papers will be available online to everyone beginning on the day of the workshop, August 6, 2012.
At least one author from every accepted paper must plan to attend the workshop and present.
Simultaneous submission of the same work to multiple venues, submission of previously published work, and plagiarism constitute dishonesty or fraud. USENIX, like other scientific and technical conferences and journals, prohibits these practices and may take action against authors who have committed them. See the USENIX Conference Submissions Policy for details. Questions? Contact the program co-chairs, email@example.com, or the USENIX office, firstname.lastname@example.org.
Papers accompanied by nondisclosure agreement forms will not be considered. Accepted submissions will be treated as confidential prior to publication on the USENIX CSET '12 Web site; rejected submissions will be permanently treated as confidential.