The SocialSensor Project - Sensing User Generated Input for Improved Media Discovery and Experience

Sotiris Diplaris, Symeon Papadopoulos, Yiannis Kompatsiaris
Information Technologies Institute

Centre for Research and Technology Hellas

Thessaloniki, Greece
{diplaris, papadop, ikom}@iti.gr

ABSTRACT
The SocialSensor project builds a new framework for real-time multimedia indexing and search across multiple social media sources. SocialSensor places particular emphasis on the real-time, social and contextual aspects of content and information consumption. It therefore integrates content mining, search and intelligent presentation in a personalised, context- and network-aware way, based on the aggregation and indexing of user-generated multimedia online content. The project will deliver practical tools that incorporate novel user-centric media search and browsing methods, showcasing them in two distinct use cases: News and Infotainment.


1 INTRODUCTION

The latest developments in the use of the Web and mobile devices have transformed the way that media content is created, edited and distributed. Media content is created and published online at unprecedented rates by both regular users and professional organisations. The wide availability of smart-phones has enabled the creation and instant sharing of media content at the time and place of an event. At the same time, social networks have become an integral part of modern life driving more and faster communication than ever before.

In this context, the challenge for traditional information providers is to use and embrace these new content generation and sharing methods, and the channels offered by social media and mobile technologies, to their fullest advantage, within both information gathering and information distribution. A key challenge in this respect is to develop appropriate tools for quickly surfacing trends, sentiments and discussions in relevant and useful ways.

To get the most out of the content shared through social networks, a number of problems are still open: (a) Sensing, discovery of trending topics and what is "up and coming" in order to guide further investigation; (b) Analysis of trends and events with respect to specific questions; (c) Filtering, drilling down to relevant content according to particular needs and interests; (d) Verification, ensuring that the content posted in social networks is accurate; (e) Visualisation, presenting search results in attractive and intuitive ways; (f) Aggregation, enabling searches across different social media platforms; (g) Speed, obtaining the desired results quickly and efficiently, without sacrificing accuracy.

SocialSensor, which started in October 2011, is a three-year FP7 European Integrated Project aiming to tackle some of the challenges outlined above and to offer solutions as well as improvements. It is developing a new framework for real-time multimedia indexing and search across multiple social media sources, introducing the concept of Dynamic Social COntainers (DySCOs), a layer of multimedia content organisation with particular emphasis on the real-time, social and contextual nature of content and information consumption. Through the proposed DySCO-centred media search, SocialSensor will integrate content mining, search and intelligent presentation in a personalised, context- and network-aware way, based on the aggregation and indexing of user-generated multimedia online content. It will be a single platform consisting of practical tools that incorporate novel user-centric media recommendation, visualisation, browsing and delivery methods.

The resulting multimedia search system is showcased and evaluated in two use cases: News and Infotainment. The news use case targets two user groups: (a) news professionals interested in leveraging social media content in their work; and (b) casual online and mobile news readers. The aim is to build applications that provide these users with a new way of discovering and accessing news information hidden in social media. The infotainment use case targets individuals attending large events, such as festivals and expos. SocialSensor aims to build applications, with an emphasis on mobile, that help attendants organise their visits to such events by providing context-aware information. In addition, SocialSensor targets event organisers, aspiring to provide a social media sensing framework that can help capture the pulse of large events and gain valuable insights into their impact on visitors.

Providing real-time social indexing capabilities for these use cases is expected to have a transformational impact on both sectors. The following sections outline some of the societal and research challenges on which SocialSensor focuses, as well as some early results stemming from the use cases' initial development phase.


2 RESEARCH CHALLENGES
We briefly outline below the main research challenges being tackled within SocialSensor. For a more detailed description of the scientific results obtained so far, please refer to the respective articles of this E-Letter.


A. Social Media Mining
Capturing real-world events and gaining insights from the analysis of social content at large scale call for new methods of mining social information and content streams. SocialSensor develops new mining techniques for discovering trends from social networking and media sharing sites, effectively managing the arrival of large, heterogeneous and evolving data. Techniques under development include trending topic discovery [1], event detection [2], and influencer and sentiment detection in social media.
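
As a rough illustration of the kind of stream processing involved, the sketch below flags bursty terms by comparing their frequency in the current time window against a running historical baseline. It is a naive heuristic with hypothetical window lengths and thresholds, not the trending topic method of [1].

```python
from collections import Counter

def bursty_terms(current_window, history_windows, min_count=2, burst_ratio=3.0):
    """Flag terms whose frequency in the current window exceeds their
    historical per-window average by a given ratio (naive burst heuristic)."""
    current = Counter(t for post in current_window for t in post.lower().split())
    history = Counter(t for window in history_windows
                      for post in window for t in post.lower().split())
    n_windows = max(len(history_windows), 1)
    bursts = []
    for term, count in current.items():
        baseline = history[term] / n_windows + 1.0   # +1 smoothing avoids division by zero
        ratio = count / baseline
        if count >= min_count and ratio >= burst_ratio:
            bursts.append((term, ratio))
    return sorted(bursts, key=lambda x: x[1], reverse=True)

# Hypothetical example: posts from the last few minutes vs. two earlier windows
print(bursty_terms(["goal for Barcelona", "goal goal incredible"],
                   [["nice weather today"], ["traffic on the ring road"]]))
```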

B. Social Search and Retrieval

Social data are captured, represented, indexed and searched from social networking and media sharing sites and the Web to provide relevant and context-aware results for multimedia and text content in real time, using scalable indexing and aggregation approaches. The concept of DySCOs is central to social search, as it provides a means of organising information between the context-based search needs of information consumers and the indexing and aggregation capabilities of large-scale data stores. In particular, the sheer volume of media content calls for approaches capable of handling massive amounts of multimedia data while striking a balance between accuracy and complexity [3].
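
To give a flavour of how accuracy can be traded against complexity when indexing visual content, the sketch below hashes high-dimensional descriptors with random sign projections so that a query is compared only against the items in its bucket rather than against the whole collection. This is a generic locality-sensitive hashing illustration with assumed parameters, not the specific approach of [3].

```python
import numpy as np
from collections import defaultdict

class RandomProjectionIndex:
    """Toy sign-random-projection (LSH-style) index: descriptors sharing a hash
    code land in the same bucket, so a query is compared only against that bucket."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))   # random hyperplanes
        self.buckets = defaultdict(list)

    def _code(self, vec):
        return tuple(int(b) for b in (self.planes @ vec > 0))

    def add(self, item_id, vec):
        self.buckets[self._code(vec)].append((item_id, np.asarray(vec)))

    def query(self, vec, top_k=5):
        candidates = self.buckets.get(self._code(vec), [])
        scored = [(item_id, float(np.dot(v, vec))) for item_id, v in candidates]
        return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Hypothetical usage with random 128-dimensional visual descriptors
index = RandomProjectionIndex(dim=128, n_bits=8)
rng = np.random.default_rng(1)
for i in range(1000):
    index.add(f"img_{i}", rng.standard_normal(128))
print(index.query(rng.standard_normal(128), top_k=3))
```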

C. Semantic Middleware
To support a smooth user experience in real-time and mobile information access scenarios, the content delivery and quality of service aspects are also considered in the project framework. The semantic middleware of SocialSensor allows ad hoc networked users to seamlessly discover, compose and share semantically-relevant multimedia data and services. For this purpose, it includes components for (a) semantic peer-to-peer selection and composition planning of data services that are relevant to a given query; (b) semantic query answering over continuous streams of potentially inconsistent social data; and (c) intelligent caching, pre-fetching and Web-based sharing of data in ad hoc user groups [4].
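
As a rough sketch of the caching component (c), the snippet below keeps recently requested media items in a bounded least-recently-used cache; the capacity and the fetch callback are hypothetical placeholders rather than part of the actual middleware.

```python
from collections import OrderedDict

class LRUMediaCache:
    """Bounded least-recently-used cache for media items shared in an ad hoc group."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key, fetch):
        if key in self.items:
            self.items.move_to_end(key)        # mark as most recently used
            return self.items[key]
        value = fetch(key)                     # e.g. retrieve from a peer or the Web
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)     # evict the least recently used entry
        return value

# Hypothetical usage: the fetch callback stands in for an actual download
cache = LRUMediaCache(capacity=2)
cache.get("video_42", fetch=lambda k: f"<bytes of {k}>")
```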

D. User Modelling and Visualisation
Finally, in the user modelling layer, in order to reflect the different information needs of users, a user and context model is created that captures both long-term interests and short-term activities, and algorithms and tools are developed for personalised information delivery and recommendations based on user feedback [5].
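
A minimal sketch of how such a model might combine long-term interests with short-term activity, assuming both are represented as keyword weight maps; the blending coefficient is a hypothetical choice, not the model of [5].

```python
def blend_profiles(long_term, short_term, alpha=0.7):
    """Combine long-term interest weights with short-term activity weights.
    Both profiles map keywords to weights in [0, 1]; alpha controls the mix."""
    keys = set(long_term) | set(short_term)
    return {k: alpha * long_term.get(k, 0.0) + (1 - alpha) * short_term.get(k, 0.0)
            for k in keys}

# Hypothetical profiles: stable interests vs. activity observed in the last hour
print(blend_profiles({"cinema": 0.9, "politics": 0.4},
                     {"festival": 0.8, "cinema": 0.5}))
```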


3 THE SOCIALSENSOR FRAMEWORK
An overview of the conceptual architecture of SocialSensor at the highest level of abstraction is presented in Fig. 1, connecting the content sources, the main system components and the end users. The architecture has subsequently been decomposed into lower levels of abstraction, identifying the necessary modules and architectural principles of the SocialSensor system. As a result, an initial implementation of the platform has been achieved and will be made available as a set of inter-dependent open-source components (through https://github.com/socialsensor). Central to the design is the concept of the Dynamic Social Container (DySCO), which serves as a social content aggregation, organisation and indexing structure. DySCOs and their attributes are created as a result of Sensor Mining methods. Indexing of DySCO fields and relations is a task for the Social Search component. Transfer, composition and packaging of DySCOs take place in the Semantic Middleware. Querying and retrieval of DySCOs are handled by the Semantic Middleware and Social Search components. DySCO-based recommendation takes place in the context of Social Search, making use of information coming from User Modelling.
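
For illustration, a DySCO can be thought of as a container that groups items from multiple platforms together with derived attributes. The sketch below shows a plausible subset of such fields inferred from the description above; it is not the project's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SocialItem:
    platform: str                  # e.g. "twitter", "youtube"
    item_id: str
    text: str
    media_urls: List[str] = field(default_factory=list)

@dataclass
class DySCO:
    """Dynamic Social Container: aggregates social items around a topic or event."""
    dysco_id: str
    title: str
    keywords: List[str] = field(default_factory=list)
    items: List[SocialItem] = field(default_factory=list)
    entities: Dict[str, float] = field(default_factory=dict)   # entity -> relevance
    sentiment: float = 0.0                                      # aggregate sentiment score

    def add_item(self, item: SocialItem) -> None:
        self.items.append(item)

# Hypothetical usage
dysco = DySCO(dysco_id="d1", title="Thessaloniki Film Festival opening")
dysco.add_item(SocialItem(platform="twitter", item_id="123", text="Great opening night!"))
```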



Fig. 1. High-level view of SocialSensor architecture.


4 THE NEWS USE CASE
Traditionally, journalists gather information from news agencies, correspondents, interviews and their own research. Over the past few years, social media like Twitter, Facebook, and YouTube have become an additional and hugely important source of information for journalists and media organisations. This is particularly true for regions that are hard to access or for situations involving a large number of parties communicating via social media. Nevertheless, journalists face considerable difficulties and challenges when using social media. They need to monitor a variety of different platforms with different interfaces, log-in mechanisms and ways of presenting information. They are overwhelmed by huge numbers of tweets, postings, images, videos and other content that is practically impossible to process in real time. They have problems assessing people's attitudes and sentiments towards certain topics. And each piece of important information they identify brings the difficult task of verifying it as quickly as possible against other sources.

The SocialSensor vision is to create a single tool that quickly collects trusted material from social media, with context [6]. Many tools are available today that help filter and browse the Social Web. The SocialSensor news tool differs from these primarily because it is designed to support professional journalism. SocialSensor should enable journalists to see public opinion 'in the raw' and in real time as it develops around subjects, people and events. Beyond that, SocialSensor should help journalists collect the best user-generated content from many sources. In addition, SocialSensor should offer a way of 'measuring truth': not by verifying content in an absolute sense, but by making it quick and simple for journalists to do so.

In summary, the goals of the news use case are to create tools that: (a) identify and present events and trends across social media sources in real time; (b) identify key influencers and opinion leaders around events; and (c) support journalists in verifying user-generated content (text, images, video and audio) from social media sources. To this end, a first News Prototype has been implemented that enables searching and browsing of news items crawled in real time from different social networking and media sharing sites, organised around automatically discovered trending topics. Web analytics, sentiment scores for topics, and integrity checks on content authors help users assess the importance of emerging news. Results are presented in a single dashboard designed for news professionals (Fig. 2).

In the context of delivering tools that are useful to news professionals, the news use case developed the Alethiometer, a tool that estimates the trustworthiness of Twitter content contributors, measuring the degree of truth behind tweets (Fig. 3). The user can see her Twitter timeline enhanced with tools for assessing the validity of each tweet, initiate an analysis of a tweet, and view the author's scores for reputation, history, popularity, influence and presence.
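
As a hedged illustration of how the reputation, history, popularity, influence and presence scores could be folded into a single trust indicator, the snippet below takes a weighted average over the five dimensions. The weights and the 0-1 scale are assumptions made for the example, not the Alethiometer's actual formula.

```python
def contributor_trust(scores, weights=None):
    """Combine per-dimension scores (each assumed to lie in [0, 1]) into one trust value."""
    default_weights = {"reputation": 0.3, "history": 0.2, "popularity": 0.15,
                       "influence": 0.2, "presence": 0.15}     # hypothetical weights
    weights = weights or default_weights
    total = sum(weights.values())
    return sum(w * scores.get(dim, 0.0) for dim, w in weights.items()) / total

# Hypothetical scores for one Twitter contributor
print(contributor_trust({"reputation": 0.8, "history": 0.9, "popularity": 0.4,
                         "influence": 0.6, "presence": 0.7}))
```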


Fig. 2. Professional News dashboard.



Fig. 3. Alethiometer screenshot.


5 THE INFOTAINMENT USE CASE

Since their inception, social media have served as an advertising and promotion channel for event organisers. As social media usage became mainstream, event attendants also benefited from this new type of media, and socialising around upcoming events, or even during events, has become a common habit. Consequently, the SocialSensor Infotainment use case targets two main user groups:

(a) Event organisers. These users are interested in the widest possible promotion of the event, including novel marketing methods that rely on social media technologies. Moreover, they are interested in monitoring several aspects of the event with the aim of increasing its visibility, and consequently its revenue.

(b) Event participants. This audience includes smartphone and social media users, as well as Web users and on-site visitors. The former are active social media users who interact with social media during the event, while the latter are rather passive users, who check event news online or simply happen to be on the event premises and are keen on receiving event-related information.

To satisfy the requirements of these different types of users, the SocialSensor Infotainment use case builds upon two frameworks: EventSense and EventLive.

A. EventSense
For large-scale social events such as festivals, attended by large crowds of people, the amount of user-generated content is constantly increasing, as progressively more people use social media to express their opinion and sentiment, or to share information about their participation in the event of interest. The large amount of content and its lack of structure make it difficult to gain an accurate view of the event. For example, since several sub-events occur within large-scale events (e.g. film screenings in the case of film festivals), it would be more informative for the end user to have the content organised on the basis of these sub-events. Similarly, the prevalence of redundancy among online comments and status updates makes it valuable to group together status updates that discuss the same topic. Moreover, as a consequence of the controversial nature of many event-related entities (e.g. films), a wide range of opinions and sentiments is expressed online by event participants. For these reasons, the online representation of an event as a sequential list of posts and status updates is ineffective for conveying an objective view of the event. A more effective representation would employ facets, such as entities, topics and sentiment, to enable more effective information presentation and access.

To this end, SocialSensor develops EventSense, a social media sensing framework that helps event organisers and event enthusiasts capture the pulse of large events and gain valuable insights into their impact on visitors. Online messages about the event are organised around entities of interest (e.g. films) and topics, and sentiment scores are extracted for each of these by aggregating the sentiment expressed in individual messages. This kind of aggregation enables the ranking of entities, topics and online users based on social interest and disposition, and thus conveys a succinct and informative view of the event highlights.
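
A minimal sketch of this aggregation step, assuming each message has already been assigned to an entity and given a sentiment polarity in [-1, 1]; the scoring is illustrative and not the actual EventSense pipeline [7].

```python
from collections import defaultdict

def rank_entities(messages):
    """messages: iterable of (entity, sentiment) pairs, sentiment in [-1, 1].
    Returns (entity, volume, mean sentiment) tuples ranked by message volume."""
    volume = defaultdict(int)
    sentiment_sum = defaultdict(float)
    for entity, sentiment in messages:
        volume[entity] += 1
        sentiment_sum[entity] += sentiment
    ranking = [(e, volume[e], sentiment_sum[e] / volume[e]) for e in volume]
    return sorted(ranking, key=lambda r: r[1], reverse=True)

# Hypothetical per-message annotations
print(rank_entities([("Film A", 0.8), ("Film A", 0.5), ("Film B", -0.2)]))
```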


Fig. 4. EventSense dashboard for the 53rd Thessaloniki Intl. Film Festival.


The aggregated information is communicated to the organisers in the form of an online dashboard (Fig. 4). Through a real-world evaluation at the 53rd Thessaloniki International Film Festival (TIFF53), it became evident that real-world event variables, such as film ratings, are correlated with aggregate statistics mined from the stream of online messages [7].
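
For instance, such a relationship could be quantified with a simple Pearson correlation between per-film ratings and per-film mention volume, as in the sketch below; the numbers are invented purely for illustration and are not the TIFF53 results.

```python
import numpy as np

# Hypothetical per-film audience ratings and message counts (not TIFF53 data)
ratings  = np.array([4.2, 3.1, 4.8, 2.5])
mentions = np.array([310, 120, 450, 80])

r = np.corrcoef(ratings, mentions)[0, 1]   # Pearson correlation coefficient
print(f"Pearson r = {r:.2f}")
```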


B. EventLive
Recently it has become common for large-scale infotainment events, be it a film or music festival, a large expo, or a sporting or scientific event, to be supported by mobile apps. However, a competitive analysis of such applications showed that they currently offer very few intelligent features, while personalisation is often limited to merely allowing the user to create a personalised event schedule.
SocialSensor envisions an intelligent real-time app that will incorporate advanced social media search and analysis features, intuitive visualisations and contextual recommendations, to deliver relevant and timely content to event attendants and ultimately to enhance the users' event experience. In general, the system will improve the following aspects of the event: (a) event-related multimedia content discovery and consumption; (b) within-event activity scheduling; (c) real-time social activity management; and (d) contextual recommendation and mobile interaction with event items (e.g. films/trailers).
To showcase its Infotainment prototype, SocialSensor supports the Thessaloniki International Film and Documentary Festivals and the Fête de la Musique Berlin, a yearly music event, with mobile apps. SocialSensor built the official mobile apps for these events (ThessFest iPhone/Android, Fete iPhone/Android; some snapshots are shown in Fig. 5) with the aim of gathering user requirements for the Infotainment prototype. Subsequently, the first version of the prototype was developed, featuring film recommendations based on similarly profiled users, Twitter-based sentiment scores for films, video streaming and recording, and advanced time-aware visualisations.
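
As an illustration of recommending films based on similarly profiled users, the sketch below applies a textbook user-based nearest-neighbour scheme over explicit film ratings; it is not necessarily the algorithm used in the prototype, and the ratings in the usage example are invented.

```python
import numpy as np

def recommend(target, others, top_n=3):
    """target: dict film -> rating for the active user; others: list of such dicts.
    Returns unseen films ranked by similarity-weighted ratings of similar users."""
    def similarity(a, b):
        common = set(a) & set(b)
        if not common:
            return 0.0
        va = np.array([a[f] for f in common], dtype=float)
        vb = np.array([b[f] for f in common], dtype=float)
        denom = np.linalg.norm(va) * np.linalg.norm(vb)
        return float(va @ vb / denom) if denom else 0.0

    scores, norms = {}, {}
    for user in others:
        sim = similarity(target, user)
        if sim <= 0:
            continue
        for film, rating in user.items():
            if film not in target:
                scores[film] = scores.get(film, 0.0) + sim * rating
                norms[film] = norms.get(film, 0.0) + sim
    ranked = sorted(((f, scores[f] / norms[f]) for f in scores),
                    key=lambda x: x[1], reverse=True)
    return ranked[:top_n]

# Invented ratings for illustration only
print(recommend({"Film A": 5, "Film B": 2},
                [{"Film A": 4, "Film C": 5}, {"Film B": 3, "Film D": 4}]))
```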


Fig. 5. Mobile application of the 15th Thessaloniki Documentary Festival.


6 SUMMARY
SocialSensor aspires to provide tools for real-time social media content indexing, search and delivery tailored to the needs of professional journalists, casual news readers, and organisers and attendees of large infotainment events. It does so by investing in innovative techniques for analysing social data and content, assisted by effective indexing of real-time social media streams. As the project work progresses, many of the results will be made publicly available to the community in the form of open-source projects, libraries, datasets and applications.

Acknowledgment
This work is supported by the SocialSensor FP7 project, partially funded by the EC under contract number 287975.
 

REFERENCES

[1]     L. M. Aiello, G. Petkos, C. Martin, D. Corney, S. Papadopoulos, R. Skraba, A. Goker, Y. Kompatsiaris, A. Jaimes, "Sensing trending topics in Twitter", IEEE Transactions on Multimedia (to appear)

[2]     G. Petkos, S. Papadopoulos, Y. Kompatsiaris. “Social Event Detection using Multimodal Clustering and Integrating Supervisory Signals”. In Proceedings of ACM International Conference on Multimedia Retrieval (ICMR), Hong Kong, 2012

[3]     L. Mantziou, S. Papadopoulos, Y. Kompatsiaris. “Large-scale semi-supervised learning by approximate Laplacian Eigenmaps, VLAD and Pyramids”. Proc. 14th Intl. Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS 2013), Paris, France, July 2013

[4]     Y. Liu, J. Geurts, J.C. Point, S. Lederer, B. Rainer, C. Mueller, C. Timmerer, H. Hellwagner, "Dynamic Adaptive Streaming over CCN: A Caching and Overhead Analysis", Proc. of IEEE International Conference on Communications (ICC 2013) - Symposium on Next-Generation Networking (Christopher Mattheisen, Tutomu Murase, eds.), Budapest, Hungary, 2013, pp. 2222-2226, IEEE Press

[5]     H. Roitman, Y. Mass, I. Eiron, D. Carmel, "Modeling the Uniqueness of the User Preferences for Recommendation Systems", SIGIR'13 poster, Jul 28-Aug 1, Dublin, Ireland (to appear)

[6]     S. Diplaris, S. Papadopoulos, Y. Kompatsiaris, N. Heise, J. Spangenberg, N. Newman, H. Hacid, ""Making Sense of it All": An Attempt to Aid Journalists in Analysing and Filtering User Generated Content", Workshop on Mining Social Network Dynamics (MSND 2012), WWW 2012 Conference, 16 April, Lyon, France

[7]     E. Schinas, S. Papadopoulos, S. Diplaris, Y. Kompatsiaris, Y. Mass, J. Herzig, L. Boudakidis, “EventSense: Capturing the Pulse of Large-scale Events by Mining Social Media Streams”, In Proceedings of PCI 2013, September 19-21, Thessaloniki, Greece (to appear)



Dr. Sotiris Diplaris
received his PhD from the Department of Electrical and Computer Engineering of the Aristotle University of Thessaloniki in 2010 and his Diploma from the Electronics and Computer Engineering Department of the Technical University of Crete in 2001. Since 2009 he has been working as a research associate with the Information Technologies Institute, part of the Centre for Research and Technology Hellas (CERTH), on a wide range of research areas such as social media analysis and biomedical informatics. His research interests include large-scale social media content mining, emergent semantics extraction, and image and speech processing.










Dr. Symeon Papadopoulos
received the Diploma degree in Electrical and Computer Engineering from the Aristotle University of Thessaloniki (AUTH), Greece, in 2004. In 2006, he received the Professional Doctorate in Engineering (P.D.Eng.) from the Technical University of Eindhoven, the Netherlands. Since September 2006, he has been working as a research associate with the Information Technologies Institute (ITI), part of the Centre for Research and Technology Hellas (CERTH), on a wide range of research areas such as information search and retrieval, social network analysis, data mining and web multimedia knowledge discovery. In 2009, he completed a distance-learning MBA degree at the Blekinge Institute of Technology, Sweden. In 2012, he defended his Ph.D. thesis at the Informatics Department of AUTH on the topic of large-scale knowledge discovery from social multimedia.









Dr. Ioannis (Yiannis) Kompatsiaris
is a Senior Researcher (Researcher B') with the Information Technologies Institute / Centre for Research and Technology Hellas, Thessaloniki, Greece. His research interests include semantic multimedia processing, social media analysis, knowledge structures, reasoning and personalisation for multimedia applications. He is the co-author of 57 papers in refereed journals, 30 book chapters, 7 patents and more than 170 papers in international conferences. He has been the co-organiser of various international conferences and workshops and has served as a regular reviewer for a number of journals and conferences. He is a Senior Member of IEEE and a member of ACM.

