ASSESS 2014: Data Mining for Educational Assessment and Feedback
Call For Papers
Assessing educational achievement and providing feedback to learners is crucial to emerging coursework, MOOCs, and labor market-making systems. Automating assessment and feedback on open-ended tasks (e.g., short-answer and essay questions) would open up both learning and recruitment to a much larger set of people who currently miss out due to resource constraints. Although automated and semi-automated assessment/feedback remains a nascent field, there have been recent advances drawing on techniques from data mining, knowledge discovery, machine learning, and crowdsourcing for peer grading. Nevertheless, the technical requirements for accuracy and expressiveness across the many purposes of assessment (some high-stakes and some low-stakes; some formative and some summative) are not yet well defined.
In this workshop, we will bring together a diverse group of researchers and industry practitioners in data mining, machine learning, and psychology to (1) discuss the current state of the art in automated assessment, (2) identify a vision for future research, and (3) lay out the steps required to build sound automated or peer-grading-based assessments. The organizers hope that a unified framework for thinking about assessment and feedback will emerge by the end of the workshop.
We invite submissions describing innovative research on all aspects of assessing educational achievement and providing feedback. Position papers and papers intended to stimulate discussion are strongly encouraged, but should be grounded in fresh insights and/or new data. Topics of interest include, but are not limited to:
- Automated and semi-automated assessment of open-ended responses
- Assessment by crowdsourcing and peer-grading
- Automatic feedback generation
- Automatic problem (item) generation/design, calibration, modeling
- Assessment design, test blueprints, rubric design, validity, and reliability
- Test integrity, equity, proctoring, and high-stakes testing
- Non-cognitive assessment of personality, behavior, motivation, etc.
- New areas: gamification, simulation-based assessments, etc.
- Insights derived from large-scale assessment data for learning, recruitment, performance prediction, etc.
Submitted papers should describe original work that has not been previously published and is not under review at a journal or another conference/workshop.
Each accepted paper will be allotted a speaking slot at the workshop.
Paper submission deadline: June 15, 2014, 23:59 Pacific Standard Time
Acceptance notification: July 5, 2014
Final paper due: July 15, 2014
Damian Bebell, Boston College
Sumit Gulwani, Microsoft Research
Eliana Feasley, Khan Academy
Donovan Stevens, edX
(More to be decided)
Divyanshu Vats, Postdoctoral Fellow, Rice University
Lav R. Varshney, Assistant Professor, University of Illinois at Urbana-Champaign
Steven E. Stemler, Associate Professor, Wesleyan University
Varun Aggarwal, CTO, Aspiring Minds