NOVAS 2026 : NOVAS Workshop at VLDB 2026
Link: http://www.novasworkshop.org

Call For Papers

********************************************************************
NOVAS Workshop @ VLDB'26 - Deadline June 13
********************************************************************

IMPORTANT DATES

All deadlines are 11:59 PM PST.

Submission deadline: 13 June 2026
Author notification: 13 July 2026
Camera-ready version: 31 July 2026
Workshop day: 31 August 2026

Recent advances in large language models (LLMs) have enabled a new generation of AI-powered systems and data management architectures, where reasoning, semantics, and learning are first-class components of data processing. The NOVAS workshop asks:

- How should system architectures, execution models, and optimization techniques evolve when LLM inference becomes a core system primitive?
- What are the right abstractions, optimization strategies, and benchmarks for serving LLM-powered workloads efficiently at scale?
- How do we trade off performance, cost, energy, and accuracy when data systems integrate reasoning, retrieval, and multi-agent execution?

We invite work and early ideas that address these questions through system design, optimization, or theoretical analysis, including contributions that may fall outside traditional database or ML categories but offer clear system-level insights.

Topics of particular interest for the workshop include, but are not limited to:

- Declarative and multi-agent systems for large-scale, agentic data processing
- Implementation and optimization of semantic operations, including semantic joins, semantic aggregations, and semantic filters
- Multimodal question answering and data processing
- DB-inspired techniques to optimize workloads of hybrid relational-AI queries
- System-level methods for efficient LLM serving: performance, energy, and cost trade-offs
- New model architectures for relational data processing (e.g., relational transformers)
- Vector databases for embeddings in RAG systems
- Benchmarks for data processing tasks using LLMs

The workshop aims to bring together researchers and practitioners in the areas of data management, generative machine learning, and information retrieval. For any questions regarding the workshop, please contact us at chairs@novasworkshop.org.

SUBMISSION GUIDELINES

The workshop will accept regular and short papers. We welcome short papers that present exciting work in progress, dataset contributions, or visionary/outrageous ideas.

All papers must be submitted in single-blind format and must be prepared in accordance with the PVLDB template available at https://vldb.org/pvldb/volumes/19/formatting. All submissions to the workshop must adhere to the diversity and inclusion writing guidelines from PVLDB and ACM.

Page limits (excluding references):

- Regular papers: 6 pages
- Short papers: 4 pages

All submissions (in PDF format) should be made through OpenReview.
Submission link: https://openreview.net/group?id=VLDB.org/2026/Workshop/NOVAS

REVIEWING PROCESS

Submissions will be single-blind: authors cannot see reviewer names, but reviewers can see author names. We use OpenReview to host papers, and the reviewing process will be public. This means that reviewers' comments can be seen by all, both during the submission period and, for accepted papers, after the decision, although the reviewers' identities will remain anonymous. Conflicts of interest (COIs) are handled using the same rules as VLDB 2026.

The use of LLMs is allowed as a general-purpose assist tool. Authors and reviewers take full responsibility for the contents written under their name, including content generated by LLMs that could be construed as plagiarism or scientific misconduct (e.g., fabrication of facts). LLMs are not eligible for authorship.
ORGANIZATION

Gerardo Vitagliano, MIT
Chunwei Liu, Purdue University
Liana Patel, Stanford University
Rana Shahout, Harvard University
Andreas Kipf, University of Technology Nuremberg
Paolo Papotti, EURECOM