[Air-L] using deep learning to characterize campaign television ads

kalev leetaru kalev.leetaru5 at gmail.com
Mon Feb 8 11:56:53 PST 2016


Apologies for cross-posting. I thought many of you would find this latest
experiment of great interest. It took the 267 campaign ads in the Internet
Archive's TV Political Ad Archive that aired on monitored television
stations over the last several months, split each ad into a sequence of
images (one frame per second), and ran the frames through Google's
deep-learning Cloud Vision API to catalog the visual contents of each
frame: the major objects, activities, and themes it depicts, any
recognizable text, an estimate of the geographic location it captures, and
the presence and emotional expression of any human faces. Coupled with the
live airing data compiled by the Archive (http://politicaladarchive.org/)
and the one-frame-per-second sequencing, this supports all kinds of
analyses, from which themes aired the most, and where, to trends in how
themes are sequenced within ads.
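For readers curious about the mechanics, here is a minimal sketch of the kind of request the experiment describes: sending one extracted frame to the Cloud Vision API's v1 `images:annotate` endpoint and asking for the feature types mentioned above (labels, text, landmark/location guesses, and faces). The API key placeholder and the example frame bytes are my own illustrations, not part of the actual pipeline.

```python
import base64

# Placeholder endpoint; a real call needs your own API key substituted in.
VISION_URL = "https://vision.googleapis.com/v1/images:annotate?key=API_KEY"

def build_annotate_request(frame_bytes):
    """Build the JSON body for one frame, requesting the feature types
    the experiment used: labels, OCR text, landmark (geographic location)
    detection, and face/emotion detection."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(frame_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 10},
                {"type": "TEXT_DETECTION"},
                {"type": "LANDMARK_DETECTION"},
                {"type": "FACE_DETECTION"},
            ],
        }]
    }

# Illustrative bytes standing in for a real one-second frame image.
body = build_annotate_request(b"\x89PNG...frame bytes...")
# POST this body as JSON to VISION_URL with any HTTP client to get back
# the per-frame annotation JSON described above.
```

Running the same request for every frame, in order, is what makes the second-by-second theme-sequencing analyses possible.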

A few high-level trends are summarized here:

https://www.washingtonpost.com/news/monkey-cage/wp/2016/02/08/what-does-artificial-intelligence-see-when-it-watches-political-ads/

The full JSON output capturing the data output by the Cloud Vision API for
each frame is here:

http://blog.gdeltproject.org/computers-watching-ads-deep-learning-meets-campaign-2016/
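To give a feel for working with that output, here is a hedged sketch of pulling label annotations out of a raw Cloud Vision `images:annotate` response. The `sample` dictionary below is invented for illustration and follows the schema the API documents (`responses[].labelAnnotations[]` with `description` and `score` fields); the Archive's JSON dump may wrap the per-frame responses differently.

```python
# Invented example response in the documented Cloud Vision shape.
sample = {
    "responses": [{
        "labelAnnotations": [
            {"description": "crowd", "score": 0.92},
            {"description": "flag", "score": 0.87},
        ],
        "faceAnnotations": [
            {"joyLikelihood": "VERY_LIKELY", "angerLikelihood": "VERY_UNLIKELY"},
        ],
    }]
}

def frame_labels(resp, min_score=0.5):
    """Return (description, score) pairs for labels above a confidence
    threshold, across all responses in one annotate reply."""
    out = []
    for r in resp.get("responses", []):
        for label in r.get("labelAnnotations", []):
            if label.get("score", 0.0) >= min_score:
                out.append((label["description"], label["score"]))
    return out

print(frame_labels(sample))  # → [('crowd', 0.92), ('flag', 0.87)]
```

Aggregating these per-frame labels over an ad's frames, weighted by the Archive's airing counts, is one way to get at the "which themes aired the most" questions above.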

You can download the image frames here:

http://blog.gdeltproject.org/image-frames-available-for-political-ad-image-analysis-pilot/

~Kalev
http://www.kalevleetaru.com/
http://blog.gdeltproject.org/
