Paper Submission Deadline: January 15, 2015
People capture countless photos and video clips with their smartphones, tablets, and cameras, and this information is exchanged in a number of different ways. The growing number of sensors for capturing environmental conditions at the moment of content creation enriches data with context awareness, allowing experiences and events of interest to be captured from a very rich personal perspective. This unveils an enormous potential for event-centred data analysis. The key idea is to use events as the primary means for understanding, organizing, and indexing content (e.g., photos, videos, news). Events have the ability to semantically encode relationships between different informational modalities. These modalities can include, but are not limited to, time, space, and involved agents and objects, with the spatio-temporal component of events being a key feature for contextual analysis.
A variety of techniques have recently been presented that leverage contextual information for event-based analysis. Purely content-based approaches have exhibited several limitations in the field of event analysis, especially for the event detection task. However, vision-based media analysis remains important for object detection and recognition and can therefore play a significant role, one that is complementary to event-driven context recognition.
The aim of this special issue is to solicit novel contributions on various aspects of event-based processing and analysis, with an emphasis on vision-based approaches that also take into account additional event-related contextual information. The convergence of the aforementioned event analysis components, wrapped in appropriate state-of-the-art human-computer interaction (HCI) technology, can result in innovative applications useful in various sectors.
Both theoretical contributions and interesting applications validated on large-scale datasets are welcome. For the proposed methodologies, the authors are encouraged to provide quantitative comparisons and performance evaluations.
The topics of this special issue include, but are not limited to:
Papers will be evaluated based on their originality, presentation, relevance, and contribution to the field, as well as their suitability to this special issue and their overall quality. Submitted papers must be written in excellent English and describe original research that has neither been published nor is currently under review by other journals or conferences. Previously published conference papers should be clearly identified by the authors at the submission stage, and an explanation should be provided of how such papers have been extended to warrant consideration for this special issue. Papers that lack originality or clarity of presentation, or that fall outside the scope of the special issue, will not be sent for review, and the authors will be promptly informed in such cases. To encourage reproducible research, preference will be given to submissions accompanied by software that generates the results claimed in the manuscript.
Paper Submission: January 15, 2015
First Round Decisions: April 15, 2015
Revisions Deadline: June 30, 2015
Final Round Decisions: September 30, 2015
Online Publication: November 2015
All manuscripts and any supplementary material should be submitted through the Elsevier Editorial System (EES). The authors must select "SI: Event-based Media Proc" when specifying the "Article Type" in the submission process. The EES website is located at: http://ees.elsevier.com/imavis/.
The Guide for Authors can be found on the journal homepage (http://www.elsevier.com/journals/image-and-vision-computing/0262-8856/guide-for-authors).
Bogdan Ionescu, University Politehnica of Bucharest, Romania (email@example.com)
Giulia Boato, University of Trento, Italy (firstname.lastname@example.org)
Zhigang Ma, Carnegie Mellon University, USA (email@example.com)
Yiannis Kompatsiaris, Centre for Research and Technology Hellas, Greece (firstname.lastname@example.org)
Nicu Sebe, University of Trento, Italy (email@example.com)
Shuicheng Yan, National University of Singapore (firstname.lastname@example.org)