[Air-L] Tweet Secondary Media Uptake

Yair Fogel-Dror yair.fogel-dror at mail.huji.ac.il
Sun Aug 5 10:35:42 PDT 2018


Two additional APIs that may be relevant:
1. For searching historical Twitter data, you can use the OSoMe
<https://market.mashape.com/truthy/osome> API (they have about 10% of
historical Twitter data, available for academic use).
Basically, you can search for a string (or a hashtag), collect tweet IDs,
and use them to collect the original tweets via the Twitter API.
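In case it helps, here is a minimal Python sketch of that second step: hydrating the collected IDs through Twitter's statuses/lookup endpoint, which accepts up to 100 IDs per call. The bearer token is a placeholder you would replace with your own credential.

```python
import json
import urllib.parse
import urllib.request

LOOKUP_URL = "https://api.twitter.com/1.1/statuses/lookup.json"
BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder: supply your own credential

def chunk_ids(tweet_ids, size=100):
    """Split tweet IDs into batches (statuses/lookup allows <= 100 per call)."""
    return [tweet_ids[i:i + size] for i in range(0, len(tweet_ids), size)]

def hydrate(tweet_ids):
    """Fetch the full tweet objects for a list of tweet ID strings."""
    tweets = []
    for batch in chunk_ids(tweet_ids):
        query = urllib.parse.urlencode({"id": ",".join(batch)})
        req = urllib.request.Request(
            f"{LOOKUP_URL}?{query}",
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        )
        with urllib.request.urlopen(req) as resp:
            tweets.extend(json.load(resp))
    return tweets
```

Note that deleted or protected tweets are silently dropped by the endpoint, so the result may be shorter than the ID list.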

2. Last I checked, LexisNexis only stores one year of historical online
news articles (but maybe I am wrong about this).
One alternative you may consider is webhose
<https://docs.webhose.io/docs/query-examples> - it is a commercial service,
but they have a free plan that may be sufficient. They also store
historical news articles going back to 2014 (though not in the free plan).
I haven't tried your use case specifically, but I think you can search for
the full tweet URL (note that some articles use a shortened version of
the URL, so I'm not sure this is the way to go), or just the hashtag with
some additional filtering.
If you choose to search for the hashtag, LexisNexis / Factiva are also
relevant, of course.
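For the URL-matching problem, one workaround is to search on several variants of the tweet's address rather than the full URL alone, since an article may embed a shortened link or reference only the numeric status ID. A rough Python sketch (the quoting format is illustrative, not webhose's actual query syntax):

```python
def tweet_query_variants(tweet_url, hashtag=None):
    """Build candidate search strings for finding a tweet in news articles.

    Articles may quote the full URL, a shortened link, or embed markup
    containing only the numeric status ID, so several variants are worth
    trying.
    """
    status_id = tweet_url.rstrip("/").rsplit("/", 1)[-1]
    variants = [f'"{tweet_url}"', f'"status/{status_id}"']
    if hashtag:
        variants.append(f'"{hashtag}"')
    return variants
```

You would then run each variant against the archive's search endpoint and deduplicate hits by article URL.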

Yair



On Sun, Aug 5, 2018 at 6:13 PM, Astvansh, Vivek <astvansh at iu.edu> wrote:

> 1. I GUESS you can search for the tweet URL if you have a news media
> article in a markup language (XML, HTML, etc.) format. However, if you use
> Factiva or LexisNexis Academic, you will download text (TXT, DOCX, PDF) and
> not markup language format files.
>
>
> 2. Although they differ in MINOR ways, I GUESS Factiva and LexisNexis
> Academic are substitutes. Also, they are far more acceptable sources of
> media data than web search.
>
>
> 3. I like your use of the label "secondary media." You seem to consider
> tweets by users as primary media and the appearance of those tweets in
> other media (as evidence for a narrative) as secondary media.
>
>
> I hope others respond so that we go beyond my GUESSES.
>
> ________________________________
> From: LIAM MONNINGER <lmonninger at ucla.edu>
> Sent: Sunday, August 5, 2018 10:56 AM
> To: Astvansh, Vivek
> Subject: Re: [Air-L] Tweet Secondary Media Uptake
>
> Thanks for the quick response Vivek!
>
> Yeah, that's essentially what I'd like to do. I was thinking there might
> be a way to trace where a tweet is embedded via its URL. But your method
> should certainly work for what I'm trying to accomplish!
>
> A couple of questions...
>
> 1) In your experience, is there a major difference between the results
> returned by Factiva vs. those returned by LexisNexis?
>
> 2) Methodologically, can I rely on the number of search results as a
> figure for my secondary media audience? I.e., are the local search engines
> viable for this kind of research?
>
> Thanks,
> Liam Monninger
>
> On Sun, Aug 5, 2018 at 7:28 AM, Astvansh, Vivek <astvansh at iu.edu<mailto:
> astvansh at iu.edu>> wrote:
> Thank you for asking, Liam. Let me paraphrase my understanding of what you
> want. Given a topic (a hashtag, or just a word or phrase), you'd first
> collect all the tweets on that topic in a given date range. For each tweet
> in your sample, you next want to obtain data on news media articles
> that refer to the focal tweet. Did I understand you correctly AND
> completely? If not, please help.
>
> If yes: you can obtain Twitter data from vendors such as Crimson Hexagon
> or GNIP. Next, you can write a program that searches Factiva or LexisNexis
> Academic for each tweet by text, username, etc.
> ________________________________________
> From: Air-L <air-l-bounces at listserv.aoir.org<mailto:air-l-bounces@
> listserv.aoir.org>> on behalf of LIAM MONNINGER <lmonninger at ucla.edu
> <mailto:lmonninger at ucla.edu>>
> Sent: Sunday, August 5, 2018 10:24 AM
> To: air-l at listserv.aoir.org<mailto:air-l at listserv.aoir.org>
> Subject: [Air-L] Tweet Secondary Media Uptake
>
> Hi everyone,
>
> My name is Liam Monninger. I'm an undergraduate researcher at UCLA, working
> on a project that seeks to understand tweets as commitment devices in
> international politics.
>
> To better understand the audience of a given tweet, I would like to be able
> to see and tabulate the secondary media where said tweet is embedded. I was
> thinking, in the very least, I could just make a list of major publications
> worldwide and manually check articles to see if a certain tweet is
> embedded. But, it would be nice to have a more sophisticated method.
>
> Any ideas? Is there perhaps a useful API?
>
> Thanks,
> Liam Monninger
> _______________________________________________
> The Air-L at listserv.aoir.org<mailto:Air-L at listserv.aoir.org> mailing list
> is provided by the Association of Internet Researchers http://aoir.org
> Subscribe, change options or unsubscribe at: http://listserv.aoir.org/
> listinfo.cgi/air-l-aoir.org
>
> Join the Association of Internet Researchers:
> http://www.aoir.org/
>


