
MuMe 2018 : 1st International Workshop on Multi-Method Evaluation of Personalized Systems


When Jul 8, 2018 - Jul 8, 2018
Where Singapore
Submission Deadline Apr 17, 2018
Notification Due May 15, 2018
Final Version Due May 27, 2018
Categories: computer science, evaluation, recommender systems, user-centric

Call For Papers

1st International Workshop on Multi-Method Evaluation of Personalized Systems (MuMe 2018)

held in conjunction with UMAP 2018 (User Modeling, Adaptation and Personalization)
8 - 11 July, 2018 at Nanyang Technological University, Singapore

The MuMe 2018 workshop aims to raise awareness in the user modeling community of the significance of using multiple methods in the evaluation of recommender systems and other personalized systems.

Employing a multi-method evaluation that integrates several individual methods (e.g., combining think-aloud with a survey containing open-ended questions, or combining offline prediction simulation on an open dataset with a survey containing closed- and open-ended questions) yields a richer, more integrated picture of the user experience and quality drivers of personalized systems.

The primary goal of this workshop is to build a community around the multi-method evaluation topic and to develop a long-term research agenda for the topic.

We solicit position and research papers (4 pages excluding references, UMAP 2018 Extended Abstracts Format) that address challenges in the multi-method evaluation of recommender systems and other personalized systems. This includes:

- "lessons learned" from the successful application of multi-method evaluations,
- "post mortem" analyses describing specific evaluation strategies that failed to uncover decisive elements,
- "overview papers" analyzing patterns of challenges or obstacles to multi-method evaluation, and
- "solution papers" presenting solutions towards identified challenges.

Possible questions addressed may include (but are not limited to):

- How can we select evaluation methods that help identify blind spots in user experience? What criteria could be used to compare and evaluate the suitability of methods for given evaluation objectives, and how can we develop such criteria?
- How can we integrate and combine the results of multiple methods to get a comprehensive picture of user experience?
- What are the challenges and limitations of single- or multi-method evaluation of RecSys? How can we overcome such hurdles?
- What are viable user-centric multi-method study designs (guidelines) for evaluating RecSys? What are the lessons learned from successful or unsuccessful user-centric multi-method study designs?

Important Dates
Submission deadline: April 17, 2018
Notification: May 15, 2018
Deadline for camera ready version: May 27, 2018
Workshop date: July 8, 2018
(all deadlines are Anywhere on Earth, AoE)

Workshop Organizers
Christine Bauer, Johannes Kepler University Linz, Austria
Eva Zangerle, University of Innsbruck, Austria
Bart P. Knijnenburg, Clemson University, USA

For details, visit the workshop’s website.
