
SSCS 2010 : ACM Multimedia 2010 Workshop on Searching Spontaneous Conversational Speech


Link: http://www.searchingspeech.org
 
When Oct 29, 2010 - Oct 29, 2010
Where Firenze, Italy
Submission Deadline Jun 10, 2010
Notification Due Jul 10, 2010
Final Version Due Jul 20, 2010
Categories    multimedia   information retrieval   speech
 

Call For Papers

----------------------------------------------------------------------
CfP: ACM Multimedia 2010 Workshop on
Searching Spontaneous Conversational Speech (SSCS 2010)
-----------------------------------------------------------------------
Workshop held on 29 October 2010, in Firenze, Italy
in conjunction with ACM Multimedia 2010

Website: http://www.searchingspeech.org/

The SSCS 2010 workshop is devoted to presentation and discussion of recent research results concerning advances and innovation in the area of spoken content retrieval and the area of multimedia search that makes use of automatic speech recognition technology.

Spoken audio is a valuable source of semantic information, and speech analysis techniques, such as speech recognition, hold high potential to improve information retrieval and multimedia search. Nonetheless, speech technology remains underexploited by multimedia systems, in particular by those providing access to multimedia content containing spoken audio. Early success in the area of broadcast news retrieval has yet to be extended to application scenarios in which the spoken audio is unscripted, unplanned and highly variable with respect to speaker and style characteristics. The SSCS 2010 workshop is concerned with a wide variety of challenging spoken audio domains, including: lectures, meetings, interviews, debates, conversational broadcast (e.g., talk shows), podcasts, call center recordings, cultural heritage archives, social video on the Web and spoken natural language queries. As speech steadily moves closer to rivaling text as a medium for access and storage of information, the need for technologies that can effectively make use of spontaneous conversational speech to support search becomes more pressing.

In order to move the use of speech and spoken content in retrieval applications and multimedia systems beyond the current state of the art, sustained collaboration of researchers in the areas of speech recognition, audio processing, multimedia analysis and information retrieval is necessary. Motivated by the aim of providing a forum where these disciplines can engage in productive interaction and exchange, Searching Spontaneous Conversational Speech (SSCS) workshops were held in conjunction with SIGIR 2007, SIGIR 2008 and ACM Multimedia 2009. The SSCS workshop series continues at ACM Multimedia 2010 with a focus on research that strives to move retrieval systems beyond conventional queries and beyond the indexing techniques used in traditional mono-modal settings or text-based applications.

We welcome contributions on a range of trans-disciplinary research issues related to these research challenges, including:

- Information Retrieval techniques in the speech domain (e.g., applied to speech recognition lattices)
- Multimodal search techniques exploiting speech transcripts (audio/speech/video fusion techniques including re-ranking)
- Search effectiveness (e.g., evidence combination, query/document expansion)
- Exploitation of audio analysis (e.g., speaker's emotional state, speaker characteristics, speaking style)
- Integration of higher level semantics, including topic segmentation and cross-modal concept detection
- Spoken natural language queries
- Large-scale speech indexing approaches (e.g., collection size, search speed)
- Multilingual settings (e.g., multilingual collections, cross-language access)
- Advanced interfaces for results display and playback of multimedia with a speech track
- Exploiting user contributed information, including tags, rating and user community structure
- Affordable, light-weight solutions for small collections, i.e., for the long tail

We invite contributions for oral presentation (short papers of 4 pages or long papers of 6 pages) as well as demonstration papers (4 pages). The submission deadline is 10 June 2010. For further information see the website: http://www.searchingspeech.org/

At this time, we are also pre-announcing a special issue of ACM Transactions on Information Systems on the topic of searching spontaneous conversational speech. The special issue is based on the SSCS workshop series, but will involve a separate call for papers. Authors of the best papers from SSCS 2010 will be especially encouraged to submit to the special issue call.

SSCS 2010 Organizers
Martha Larson, Delft University of Technology, Netherlands
Roeland Ordelman, Sound & Vision and University of Twente, Netherlands
Florian Metze, Carnegie Mellon University, USA
Franciska de Jong, University of Twente, Netherlands
Wessel Kraaij, TNO and Radboud University, Netherlands
