
LASER 2013 : Learning from Authoritative Security Experiment Results


Link: http://www.laser-workshop.org
 
When: Oct 16, 2013 - Oct 17, 2013
Where: Arlington, VA
Abstract Registration Due: Sep 30, 2013
Submission Deadline: Jun 27, 2013
Notification Due: Aug 27, 2013
Final Version Due: Nov 15, 2013
Categories: cyber security, experimental methods, security research, computer security
 

Call For Papers

As computer security grows in importance, this workshop aims to help the security community quickly identify and learn from both successes and failures. The workshop focuses on research that has a valid hypothesis and a reproducible experimental methodology, but where the results were unexpected or did not validate the hypothesis, where the methodology addressed difficult and/or unexpected issues, or where unsuspected confounding factors were found in prior work.

Topics include, but are not limited to:

Unsuccessful research in experimental security
Methods and designs for security experiments
Experimental confounds, mistakes, and mitigations
Successes and failures reproducing experimental techniques and/or results
Hypothesis and methods development (e.g., realism, fidelity, scale)

The specific security results of experiments are of secondary interest for this workshop.

Journals and conferences typically publish papers that report successful experiments that extend our knowledge of the science of security or assess whether an engineering project has performed as anticipated. Some of these results have high impact; others do not.

Unfortunately, papers reporting experiments with unanticipated results that the experimenters cannot explain, experiments whose results are not statistically significant, or engineering efforts that fail to produce the expected results are frequently considered unpublishable because they do not appear to extend our knowledge. Yet some of these “failures” may actually provide clues to results even more significant than those the original experimenter anticipated. The research is useful even though the results are unexpected.

Useful research includes a well-reasoned hypothesis, a well-defined method for testing that hypothesis, and results that either disprove or fail to prove the hypothesis. It also includes a methodology documented sufficiently so that others can follow the same path. When framed in this way, “unsuccessful” research furthers our knowledge of a hypothesis and a testing method. Others can reproduce the experiment itself, vary the methods, and change the hypothesis, as the original result provides a place to begin.

As an example, consider an experiment assessing a protocol that uses biometric authentication as part of the process of granting access to a computer system. The null hypothesis might be that the biometric technology does not distinguish between two different people; in other words, that the biometric element of the protocol makes the approach vulnerable to a masquerade attack. Suppose the null hypothesis is not rejected; it is still valuable to publish this result. First, it might prevent others from trying the same biometric method. Second, it might lead them to develop the technology further: to determine whether a different style of biometrics would improve matters, or whether the environment in which authentication is attempted makes a difference. For example, a retinal scan may fail to recognize people in a crowd, but succeed where users present themselves one at a time to an admission device with controlled lighting, or when multiple “tries” are allowed. Third, it might lead to modifying the encompassing protocol to make masquerading more difficult in some other way.
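To make the statistical framing concrete, here is a hedged sketch, with purely illustrative numbers (not drawn from any real study), of how such a null hypothesis might be tested with a standard two-proportion z-test. Suppose 100 genuine attempts yield 90 acceptances and 100 impostor attempts yield 82 acceptances; under the null hypothesis the two acceptance rates are equal, and in LaTeX notation:

\[
  \hat{p} = \frac{90 + 82}{200} = 0.86, \qquad
  z = \frac{0.90 - 0.82}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{100} + \frac{1}{100}\right)}} \approx 1.63
\]

Because |z| < 1.96, the difference is not significant at the 0.05 level and the null hypothesis is not rejected: on these hypothetical numbers, the experimenter could not show that the biometric element distinguishes genuine users from impostors, which is precisely the kind of negative result the workshop invites.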

Equally important is research designed to reproduce the results of earlier work. Reproducibility is key to science as a way to validate earlier work or to uncover errors or problems in it. A failure to reproduce the results leads to a deeper understanding of the phenomena that the earlier work uncovered.

Finally, discussions about papers, proposals, and projects often seek to explore previously tried strategies that failed, but a published record of those attempts usually does not exist. Old ideas are often pursued anew because the community is unaware of the prior failure. The workshop provides a venue that can help close this gap in the security community’s research literature.

Important Dates:

March 4: Rolling consideration of 1-page structured abstracts begins
June 27: Full papers due
August 27: Authors notified of accepted/rejected full papers
September 23: Pre-conference versions of full papers due
September 30: Rolling consideration of 1-page structured abstracts ends
October 16-17: LASER 2013 Workshop
November 15: Post-conference versions of full papers due


Both full papers and structured abstracts are solicited. Full papers follow a typical pattern of submission, review, notification, pre-conference version, conference presentation, and final post-conference version. One-page structured abstracts serve two purposes: (1) to enable authors to receive early feedback prior to investing significant effort writing papers, and (2) to provide all attendees a forum to share an abstract of their work before the workshop.

Abstracts will be reviewed by at least two PC members with comments returned in 5-10 days; submissions before June 27 will receive an “encouraged,” “neutral,” or “discouraged” indication for submission of a full paper based on the abstract. The pre-submission feedback is for the author’s use only. All abstracts deemed relevant by the PC will be available on the laser-workshop.org website before the conference, but they will not be part of the proceedings.

A structured abstract is typically 200-500 words and less than one page. It includes at least these elements: background, aim, method, results, and conclusions. See the workshop website for more details. The abstracts for full papers should be similarly structured.

Full paper submissions should be 6–10 pages long including tables, figures, and references. All submissions should use the ACM Proceedings format: http://www.acm.org/sigs/publications/proceedings-templates (Option 1, if using LaTeX). At least one author from every accepted full paper must plan to attend the workshop and present. All papers and abstracts must be submitted via OpenConf https://www.openconf.org/laser2013.
The LASER workshop is funded in part by NSF Grant #1143766 and by the Applied Computer Security Associates (ACSA).

See www.laser-workshop.org for full and up-to-date details on the workshop.

Please direct all questions to info@laser-workshop.org.

Organizing Committee:

Laura Tinnel (SRI International), General Chair
Greg Shannon (CMU/CERT), Program Co-Chair
Tadayoshi Kohno (U Wash), Program Co-Chair
Christoph Schuba (Oracle), Proceedings
Carrie Gates (CA Technologies), Treasurer
David Balenson (SRI International), Local Arrangements
Ed Talbot (Consultant), Publicity


Program Committee:


Greg Shannon (CMU/CERT), Co-Chair
Tadayoshi Kohno (U Wash), Co-Chair
David Balenson (SRI International)
Matt Bishop (UC Davis)
Joseph Bonneau (Google)
Pete Dinsmore (JHU/APL)
Debin Gao (Singapore Management University)
Carrie Gates (CA Technologies)
Alefiya Hussain (USC/ISI)
Carl Landwehr (George Washington University)
Sean Peisert (UC Davis and LBNL)
Angela Sasse (University College London)
Christoph Schuba (Oracle)
Ed Talbot (Consultant)
Laura Tinnel (SRI International)
Kami Vaniea (Michigan State)
Charles Wright (Portland State)


Submission Guidelines: Background and Purpose

In his keynote speech at the 2012 LASER workshop, Dr. Roy Maxion of Carnegie Mellon University suggested a simple way to help improve the quality of experimental research for cyber security: to begin requiring that papers follow a specific structure, including a structured abstract that concisely and clearly summarizes the whole story of the work detailed in the paper.

As a result of a round-table discussion at that workshop, the organizing committee decided to try this approach for the 2013 workshop. To that end, we are inviting structured abstracts for comment and review, to help authors refine their abstracts before developing full papers. Workshop participants who do not have a paper in the workshop are also encouraged to take advantage of the abstract submission and review process to improve their abstract-writing skills. We will continue to accept and review abstracts after the official paper deadline. All abstracts deemed relevant by the PC will be available on the laser-workshop.org website before the conference, but they will not be part of the proceedings.

Researchers should not have to read a whole paper to determine what the research described in it is about. The idea behind a structured abstract is to avoid that problem by giving the reader a concise summary of the whole study, reducing the reader's overall cognitive load. We also recommend using boldface element names as headers within the abstract to ease reading. Finally, a well-written structured abstract provides a good outline for developing the full paper and a useful basis for meta-analyses.

Structured Abstract Guidelines

Abstracts should be 200-500 words and less than one page in length. They should contain concise statements that tell the whole story of the study, presented in a consistent structure that lets a reader quickly assess whether the full paper may meet his or her needs and warrant reading. The essential elements of a structured abstract are: background, aim, method, results, and conclusions. Each element is defined below, followed by a minimal LaTeX sketch.

Background. State the background and context of the work described in the paper.

Aim. State the research question, objective, or purpose of the work in the paper.

Method. Briefly summarize the method used to conduct the research, including the subjects, procedure, data, and analytical method.

Results. State the outcome of the research using measures appropriate for the study conducted. Results are essentially the numbers.

Conclusions. State the lessons learned as a result of the study and recommendations for future work. The conclusions are the "so what" of the study.
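
As a concrete illustration of this layout, here is a minimal LaTeX sketch of a structured abstract with boldface element names as headers. For brevity it uses the standard article class; actual submissions must use the ACM proceedings template described under Paper Guidelines below.

\documentclass{article}
\begin{document}
\begin{abstract}
\noindent
\textbf{Background.} One or two sentences of context for the work. \\
\textbf{Aim.} The research question, objective, or purpose. \\
\textbf{Method.} Subjects, procedure, data, and analytical method, briefly. \\
\textbf{Results.} The principal outcomes, in measures appropriate to the study. \\
\textbf{Conclusions.} Lessons learned and recommendations for future work.
\end{abstract}
\end{document}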


Using this format gives the author a good structure not only for the paper itself but also for the slides used to present the work.

Here is an example abstract (140 words) from the LASER 2012 paper cited below:

Kevin S. Killourhy and Roy A. Maxion. 2012. Free vs. transcribed text for keystroke-dynamics evaluations. In Proc. of the 2012 Workshop on Learning from Authoritative Security Experiment Results (LASER '12). ACM, New York, NY, USA, 1-8.

Background. One revolutionary application of keystroke dynamics is continuous reauthentication: confirming a typist’s identity during normal computer usage without interrupting the user.

Aim. In laboratory evaluations, subjects are typically given transcription tasks rather than free composition (e.g., copying rather than composing text), because transcription is easier for subjects. This work establishes whether free and transcribed text produce equivalent evaluation results.

Method. Twenty subjects completed comparable transcription and free-composition tasks; two keystroke-dynamics classifiers were implemented; each classifier was evaluated using both the free-composition and transcription samples.

Results. Transcription hold and keydown-keydown times are 2–3 milliseconds slower than free-text features; tests showed these effects to be significant. However, these effects did not significantly change evaluation results.

Conclusions. The additional difficulty of collecting freely composed text from subjects seems unnecessary; researchers are encouraged to continue using transcription tasks.


Paper Guidelines

Format

Full papers should be 6-10 US letter-sized pages (8.5" x 11"), inclusive of tables, figures, and references. They should follow the latest ACM Proceedings Format (updated May 2013) and be submitted using Option 2, WITH the permission block. Papers must comply with the template margins and BE SUBMITTED IN PDF FORMAT.

Please include page numbers on submitted papers to aid reviewers. Page numbers should be excluded from final camera-ready papers. Additional guidelines for camera-ready papers will be provided to accepted paper authors.
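
For orientation only, here is a hedged sketch of a submission's top matter. It assumes the sig-alternate class that the ACM template page distributed at the time; the permission-block commands (\permission, \conferenceinfo, \CopyrightYear) and the page-numbering workaround are typical of that class, not requirements stated in this call, so confirm everything against the template you actually download.

% Hedged sketch only; confirm all class-specific commands against the
% ACM template you download.
\documentclass{sig-alternate}

\begin{document}

% Permission block (Option 2): sig-alternate prints this on page 1.
\permission{Permission to make digital or hard copies of all or part of
  this work for personal or classroom use is granted without fee \ldots}
\conferenceinfo{LASER~'13,}{October 16--17, 2013, Arlington, VA, USA.}
\CopyrightYear{2013}

% sig-alternate suppresses page numbers by default; a common workaround
% for review copies (remove it for the camera-ready version):
\pagestyle{plain}

\title{Paper Title}
\numberofauthors{1}
\author{\alignauthor Author Name\\
  \affaddr{Affiliation}\\
  \email{author@example.org}}
\maketitle

\begin{abstract}
Structured abstract goes here (see the guidelines above).
\end{abstract}

\section{Introduction}
Body text \ldots

\end{document}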

Content

Full papers should provide sufficient detail that a reviewer can determine the validity of the experiment(s) conducted and repeat the experiment if so desired. In addition to the title and author, suggested section headings are listed below (a LaTeX sketch of this skeleton follows the list):

Structured abstract (following the above guidelines)
Introduction
Background and related work
Aim (the problem being solved)
Approach
Method (to include apparatus/instrumentation, materials, subjects/objects, instructions given to subjects, design, and procedure)
Data
Analysis
Results
Discussion
Limitations
Conclusion
Future work
Acknowledgements
References
Appendices
Endnotes and footnotes
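
As a sketch only, the heading list above maps onto a LaTeX body (continuing the top-matter sketch under Format) roughly as follows; merge, rename, or reorder sections to fit the actual study:

% Illustrative skeleton; drop into the document body of the Format sketch.
\section{Introduction}
\section{Background and Related Work}
\section{Aim}          % the problem being solved
\section{Approach}
\section{Method}       % apparatus, materials, subjects, instructions,
                       % design, and procedure
\section{Data}
\section{Analysis}
\section{Results}
\section{Discussion}
\section{Limitations}
\section{Conclusion}
\section{Future Work}
\section*{Acknowledgements}
% References via \bibliographystyle{abbrv} and \bibliography{refs};
% appendices follow \appendix; endnotes and footnotes via \footnote{...}.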


By using a predictable structure for content, the author helps readers, who then know what to expect in each section. A predictable structure also makes it easier for researchers to read sections in whatever order they choose and to find particular material in the paper after a first reading.

Related Resources

Insights 2024   Fifth Workshop on Insights from Negative Results in NLP
CSW 2024   2024 3rd International Conference on Cyber Security
DataMod 2024   12th International Symposium DataMod 2024: From Data to Models and Back
Hacktivity 2024   Hacktivity 2024 - IT Security Festival
LAJC 2024   Latin-American Journal of Computing
Security 2025   Special Issue on Recent Advances in Security, Privacy, and Trust
SPISCS 2024   2024 3rd International Conference on Signal Processing, Information System and Cyber Security (SPISCS 2024)
MLTEEF 2024   Machine Learning Technologies on Energy Economics and Finance
ICMLA 2024   23rd International Conference on Machine Learning and Applications
VSI-HFCS 2024   Computer Standards & Interfaces Special Issue 2024 - Human Factors in Cyber Security