
SSCS 2010 : ACM Multimedia 2010 Workshop on Searching Spontaneous Conversational Speech


Link: http://www.searchingspeech.org
 
When Oct 29, 2010 - Oct 29, 2010
Where Firenze, Italy
Submission Deadline Jun 10, 2010
Notification Due Jul 10, 2010
Final Version Due Jul 20, 2010
Categories    multimedia, information retrieval, speech
 

Call For Papers

----------------------------------------------------------------------
CfP: ACM Multimedia 2010 Workshop on
Searching Spontaneous Conversational Speech (SSCS 2010)
-----------------------------------------------------------------------
Workshop held on 29 October 2010, in Firenze, Italy
in conjunction with ACM Multimedia 2010

Website: http://www.searchingspeech.org/

The SSCS 2010 workshop is devoted to the presentation and discussion of recent research results on advances and innovations in spoken content retrieval and in multimedia search that makes use of automatic speech recognition technology.

Spoken audio is a valuable source of semantic information, and speech analysis techniques, such as speech recognition, hold high potential to improve information retrieval and multimedia search. Nonetheless, speech technology remains underexploited by multimedia systems, in particular by those providing access to multimedia content containing spoken audio. Early success in the area of broadcast news retrieval has yet to be extended to application scenarios in which the spoken audio is unscripted, unplanned and highly variable with respect to speaker and style characteristics. The SSCS 2010 workshop is concerned with a wide variety of challenging spoken audio domains, including: lectures, meetings, interviews, debates, conversational broadcast (e.g., talk shows), podcasts, call center recordings, cultural heritage archives, social video on the Web and spoken natural language queries. As speech steadily moves closer to rivaling text as a medium for access and storage of information, the need for technologies that can effectively make use of spontaneous conversational speech to support search becomes more pressing.

In order to move the use of speech and spoken content in retrieval applications and multimedia systems beyond the current state of the art, sustained collaboration of researchers in the areas of speech recognition, audio processing, multimedia analysis and information retrieval is necessary. Motivated by the aim of providing a forum where these disciplines can engage in productive interaction and exchange, Searching Spontaneous Conversational Speech (SSCS) workshops were held in conjunction with SIGIR 2007, SIGIR 2008 and ACM Multimedia 2009. The SSCS workshop series continues at ACM Multimedia 2010 with a focus on research that strives to move retrieval systems beyond conventional queries and beyond the indexing techniques used in traditional mono-modal settings or text-based applications.

We welcome contributions on a range of trans-disciplinary research issues related to these research challenges, including:

- Information Retrieval techniques in the speech domain (e.g., applied to speech recognition lattices)
- Multimodal search techniques exploiting speech transcripts (audio/speech/video fusion techniques including re-ranking)
- Search effectiveness (e.g., evidence combination, query/document expansion)
- Exploitation of audio analysis (e.g., speaker's emotional state, speaker characteristics, speaking style)
- Integration of higher level semantics, including topic segmentation and cross-modal concept detection
- Spoken natural language queries
- Large-scale speech indexing approaches (e.g., collection size, search speed)
- Multilingual settings (e.g., multilingual collections, cross-language access)
- Advanced interfaces for results display and playback of multimedia with a speech track
- Exploiting user-contributed information, including tags, ratings and user community structure
- Affordable, lightweight solutions for small collections, i.e., for the long tail

Contributions are invited for oral presentation (short papers of 4 pages or long papers of 6 pages) and as demonstration papers (4 pages). The submission deadline is 10 June 2010. For further information, see the website: http://www.searchingspeech.org/

At this time, we are also pre-announcing a special issue of ACM Transactions on Information Systems on the topic of searching spontaneous conversational speech. The special issue builds on the SSCS workshop series, but will have a separate call for papers. The authors of the best papers from SSCS 2010 will be especially encouraged to submit to the special issue.

SSCS 2010 Organizers
Martha Larson, Delft University of Technology, Netherlands
Roeland Ordelman, Sound & Vision and University of Twente, Netherlands
Florian Metze, Carnegie Mellon University, USA
Franciska de Jong, University of Twente, Netherlands
Wessel Kraaij, TNO and Radboud University, Netherlands
