Remember when you first saw Flickr? At the time social media was still pretty much a textual world, with visual glimpses here and there. Today, we’re posting 1.8 billion images on social media every day. But it’s not just a matter of larger numbers.
The function of the image itself in the communication process has changed. Images are increasingly used as an independent means of personal expression, with the text associated with them shrinking more and more into a condensed caption (often packed with emojis and ironic hashtags) that serves the ancillary role of providing some necessary context.
Image-sharing platforms like Flickr have given visual communication a whole new avenue to find an audience and prosper. Products like Tumblr, Instagram and Pinterest have further developed the idea of using images as a means of expression, facilitating the emergence of a visual grammar for expressing personal opinions, making social commentary or simply updating one’s status. Then Snapchat came along and engineered the entire product around the idea of conversations as visual banter rather than as an exchange of “texts”.
In these visual conversations, images are no longer just a means of illustrating textual content. The image carries 90% of the meaning, and the text (when there is any) simply works as a qualifier: providing some context, speeding up comprehension, disambiguating interpretation.
What this means for research is that we can no longer rely on analysing the text to understand the image. In many cases we’re now dealing with the very opposite scenario: we have to analyse the image itself in order to understand what the (often sparse) text actually means.
And this is exactly what Pulsar Vision is set to do. In collaboration with our friends at IBM Watson, we’ve just launched a suite of deep learning tools to help you make sense of images in social media. Simply put, Pulsar now helps you understand the content and the context of a picture by analysing the picture itself.
Images from Instagram, Twitter, Tumblr, Flickr and any other visual channels are instantly analysed and tagged as they are collected with what Pulsar believes is the subject of the image. Tags can be as generic as “person”, “car”, “sunset” or “waterfall”, but also as specific as “Arc de Triomphe”.
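To make the tagging step concrete, here is a minimal sketch of how a confidence-filtered tagging pipeline of this kind might look. Everything here is hypothetical: `classify_image` stands in for the real deep learning classifier (it returns canned labels for illustration), and the function names and threshold are our own assumptions, not Pulsar’s actual implementation.

```python
def classify_image(image_url):
    # Stub for illustration: a real classifier would send the image to a
    # deep learning model and receive back (label, confidence) pairs.
    canned = {
        "hyde_park.jpg": [("person", 0.92), ("park", 0.88), ("sunset", 0.41)],
    }
    return canned.get(image_url, [])


def tag_image(image_url, min_confidence=0.5):
    # Keep only the labels the classifier is reasonably confident about,
    # so generic low-confidence guesses don't pollute the tag set.
    return [label for label, score in classify_image(image_url)
            if score >= min_confidence]


print(tag_image("hyde_park.jpg"))  # ['person', 'park']
```

The confidence threshold is the interesting design choice: set it low and you get broad, generic tags on almost everything; set it high and only specific, reliable subjects (a landmark like “Arc de Triomphe”) survive.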
To facilitate the exploration of vast image datasets, the new image tags are now available not only in the Results view but also as a treemap visualisation in the Content section of the dashboard, where Pulsar displays the most popular subjects in the images shared by users across any of the social media channels you’re tracking. The map below, for example, shows a breakdown of the images posted about Hyde Park in London.
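The aggregation behind a treemap like this is straightforward to sketch: count how often each tag appears across the collected posts, then size each rectangle by its count. The snippet below shows the counting step only, with made-up tag lists standing in for real collected posts.

```python
from collections import Counter

# Made-up example data: one list of image tags per collected post.
posts = [
    ["person", "park"],
    ["sunset", "park"],
    ["person", "dog"],
]

# Flatten all tag lists and count occurrences of each subject.
tag_counts = Counter(tag for tags in posts for tag in tags)

# The most frequent subjects become the largest treemap rectangles.
print(tag_counts.most_common())
```

Ranking by frequency is what lets the dashboard surface the dominant subjects at a glance, even across millions of images.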
Access to Pulsar Vision is free for all Pulsar users for the first 3 months following release (Feb 2016). Thereafter it will be available as an add-on alongside other exciting new Artificial Intelligence modules we are integrating into the platform. More news on this very soon.
If you’re already using Pulsar and want to learn more about our image analysis tool, please contact your account manager or email: Accounts@Pulsarplatform.com.
Alternatively, if you’re yet to experience the power of Pulsar and you’d like to set up a demo, email James.Cuthbertson@Pulsarplatform.com or call us on 020 7874 6577.