DLS 2018: 1st Deep Learning and Security Workshop
Call For Papers
Thursday, May 24, 2018
co-located with the 39th IEEE Symposium on Security and Privacy
= Important Dates
* Paper Submissions Deadline: January 5, 2018 (extended from Dec. 22, 2017)
* Acceptance Notice to Authors: February 15, 2018 (tentative)
* Camera-ready papers: March 4, 2018
* Workshop date: Thursday, May 24, 2018
Over the past decade, machine learning methods have found their way into a large variety of computer security applications, including accurate spam detection, scalable discovery of new malware families, identifying malware download events in vast amounts of web traffic, detecting software exploits, blocking phishing web pages, and preventing fraudulent financial transactions, just to name a few.
At the same time, machine learning methods themselves have evolved. In particular, Deep Learning methods have recently demonstrated great improvements over more "traditional" learning approaches on a number of important tasks, including image and audio classification, natural language processing, and machine translation. Moreover, areas such as program induction and neural abstract machines have made it possible to generate and analyze programs in depth. It is therefore natural to ask how the success of these deep learning methods can be carried over to advance the state of the art in security applications.
This workshop is aimed at academic and industrial researchers interested in the application of deep learning methods to computer security problems. Some of the key research questions of interest will include the following:
* What are the strengths and shortcomings of current learning methods for representing and/or detecting security threats?
* Can deep learning methods be successfully applied to security applications?
* Can deep learning help to develop more efficient malware analysis by building a more accurate representation of program behaviors?
* What are the challenges involved, and will the use of deep learning methods significantly improve over previous results?
* Can deep learning methods better cope with problems related to learning in adversarial environments?
* What are the big, open problems in threat representation, especially for the detection of malicious software?
* How can generative models improve our understanding and detection of threats?
Topics of interest include (but are not limited to):
* Deep learning architectures
* Deep NLP (natural language processing)
* Recurrent network architectures
* Effective feature embedding
* Neural networks for graphs
* Generative adversarial networks
* Deep reinforcement learning
* Relational modeling and prediction
* Semantic knowledge bases
* Neural abstract machines and program induction
* Program representation
* Malware identification, analysis and similarity
* Detecting malicious software downloads at scale
* Representation and detection of social engineering attacks
* Botnet detection
* Intrusion detection and response
* Spam and phishing detection
* Classification of sequences of system/network events
* Security in social networks
* Application of learning to computer forensics
* Learning in adversarial environments
= Workshop Format
The workshop invites two types of submissions: full research papers and extended abstracts. Full papers are expected to present completed work and will be published in the workshop’s IEEE proceedings. On the other hand, extended abstract submissions are intended to encourage the presentation of preliminary research ideas or case studies around challenges and solutions related to the use of deep learning systems in real-world security applications. While accepted extended abstracts will not be part of the formal IEEE proceedings, they will be preserved as an online open publication (e.g., on arxiv.org) and the authors will be free to submit an extended version of their work to other venues.
One author of each accepted paper is expected to present the submitted work at the workshop. Paper presentations will follow the traditional conference-style format with questions from the audience. More information on available speaking slots and workshop format details will be provided ahead of the workshop date.
= Instructions for Submission
To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.
Full research papers must be no longer than six pages, plus one page for references.
Extended abstract submissions must be no longer than four pages, plus one page for references.
Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly recommended to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review.
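For authors starting from the IEEE templates, a minimal LaTeX skeleton along these lines meets the two-column, US letter, Times requirements by default (title, author, and affiliation below are placeholders, not from this CFP; assumes the standard IEEEtran class is installed):

```latex
% Minimal conference-paper skeleton using the IEEEtran class.
% The [conference] option selects the two-column conference layout;
% [letterpaper] selects US letter rather than A4.
\documentclass[conference,letterpaper]{IEEEtran}

\begin{document}

\title{Placeholder Paper Title}

\author{\IEEEauthorblockN{Placeholder Author}
\IEEEauthorblockA{Placeholder Affiliation}}

\maketitle

\begin{abstract}
Abstract text goes here.
\end{abstract}

Body text is typeset in the required two-column, 10-point Times layout.

\end{document}
```

Using the unmodified class defaults (no changes to margins, fonts, or column sizes) is the simplest way to stay within the stated limits.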
Submissions must be in Portable Document Format (.pdf). Authors should pay special attention to unusual fonts, images, and figures that might create problems for reviewers. Submitted documents should render correctly in Adobe Reader and when printed in black and white.
For further details on the submission process, please visit the workshop website: https://www.ieee-security.org/TC/SPW2018/DLS/
For any questions, contact the workshop organizers at: email@example.com.
** Workshop Chair
Nikolaos Vasiloglou - Symantec Center for Advanced Machine Learning
** Program Committee Chair
Roberto Perdisci - University of Georgia and Georgia Tech
** Program Committee Co-Chairs
Babak Rahbarinia - Auburn University at Montgomery
Andrew Gardner - Symantec Center for Advanced Machine Learning
** Steering Committee
Dawn Song - EECS UC Berkeley
Ian Goodfellow - Google Brain
Wenke Lee - Georgia Institute of Technology
** Technical Program Committee
Alexandros Dimakis - University of Texas at Austin
Alvaro Cardenas - University of Texas at Dallas
Alina Oprea - Northeastern University
Baris Coskun - Amazon Web Services
Battista Biggio - University of Cagliari, Italy
Benjamin Rubinstein - University of Melbourne, Australia
Bo Li - EECS UC Berkeley
Christos Dimitrakakis - Chalmers University of Technology, Sweden
Giorgio Giacinto - University of Cagliari, Italy
Javier Echauz - Symantec Research
Kang Li - University of Georgia
Kevin Roundy - Symantec Research
Konrad Rieck - TU Braunschweig, Germany
Lorenzo Cavallaro - Royal Holloway, University of London
Neil Zhenqiang Gong - Iowa State University
Nicolas Papernot - Pennsylvania State University
Philip Tully - ZeroFOX
Polo Chau - Georgia Institute of Technology
Sai Deep Tetali - Google
Tummalapalli S. Reddy - Kayak Software
Yinzhi Cao - Lehigh University
Yizheng Chen - Baidu
Yufei Han - Symantec Research