GPCE 2014 : Generative Programming: Concepts and Experiences
Conference Series : Generative Programming and Component Engineering
Call For Papers
Generative and component approaches and domain-specific abstractions are revolutionizing software development just as automation and componentization revolutionized manufacturing. Raising the level of abstraction in software specification has been a fundamental goal of the computing community for several decades. Key technologies for automating program development and lifting the abstraction level closer to the problem domain are Generative Programming for program synthesis, Domain-Specific Languages (DSLs) for compact problem-oriented programming notations, and corresponding Implementation Technologies aiming at modularity, correctness, reuse, and evolution. As the field matures, Applications and Empirical Results are of increasing importance.
The International Conference on Generative Programming: Concepts & Experiences (GPCE) is a venue for researchers and practitioners interested in techniques that use program generation, domain-specific languages, and component deployment to increase programmer productivity, improve software quality, and shorten the time-to-market of software products. In addition to exploring cutting-edge techniques of generative software, our goal is to foster further cross-fertilization between the software engineering and the programming languages research communities.
GPCE seeks contributions on all topics related to generative software and its properties. As the technology matures, this year we are particularly looking for empirical evaluations in this context. Key topics include (but are certainly not limited to):
Generative software
Domain-specific languages
(language extension, language embedding, language design, language theory, language workbenches, interpreters, compilers)
Product lines
(domain engineering, feature-oriented and aspect-oriented programming, preprocessors, feature interactions)
Metaprogramming
(reflection, staging, partial evaluation)
Implementation techniques and tool support
(components, plug-ins, libraries, metaprogramming, macros, templates, generic programming, run-time code generation, model-driven development, composition tools)
Properties of generative software
Correctness of generators and generated code
(analysis, testing, formal methods, domain-specific error messages, safety, security)
Reuse and evolution
Modularity, separation of concerns, understandability, and maintainability
Performance engineering, nonfunctional properties
(program optimization and parallelization, GPGPUs, multicore, footprint, metrics)
Application areas and engineering practice
(distributed systems, middleware, embedded systems, patterns, development methods)
Empirical evaluations of all topics above
(user studies, substantial case studies, controlled experiments, surveys, rigorous measurements)
We particularly welcome papers that address some of the key challenges in the field, for example:
Synthesizing code from declarative specifications
Supporting extensible languages and language embedding
Ensuring correctness and other nonfunctional properties of generated code; proving generators correct
Improving error reporting with domain-specific error messages
Reasoning about generators; handling variability-induced complexity in product lines
Providing efficient interpreters and execution languages
Human factors in developing and maintaining generators
Note on empirical evaluations: GPCE is committed to the empirical evaluation of generative software. Publishing empirical papers at programming-language venues can be challenging. We understand the frustration of authors when, for example, reviews simply recommend repeating entire experiments with human subjects with slight deviations in execution. To alleviate such problems, we have recruited for the program committee experts who routinely work with empirical methods, and we will actively seek external reviews where appropriate. During submission, authors can optionally indicate that a paper contains substantial empirical work, and we will endeavor to have the paper reviewed by experts familiar with the empirical research methods used in the paper. The program-committee discussions will reflect on both technical contributions and research methods. For more context, see also _Hints for Reviewing Empirical Work in Software Engineering_.
Policy: Incremental improvements over previously published work must be evaluated through systematic, comparative, empirical, or experimental evaluation. Submissions must adhere to SIGPLAN's republication policy (http://www.sigplan.org). Please contact the program chair if you have any questions about how this policy applies to your paper (email@example.com).
Submitted articles must not have been previously published or currently be submitted for publication elsewhere. The program chairs will apply the principles of the ACM Plagiarism Policy throughout the submission and review process.