[Air-L] Content Analysis of Historical Tweet Set

Melissa Bliss melissa.bliss at qmul.ac.uk
Tue May 22 22:59:41 PDT 2018


Have you looked at MAXQDA?
I am happily using it for a smaller video dataset. I noticed that the most recent upgrade has increased its Twitter-handling capacity, though I do not have the details to hand. It may not be powerful enough, but it has a free 14-day trial (and has the great benefit of being easy to use).
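If you do end up staying with the split-CSV workflow you describe below, the splitting itself need not be done by hand. Here is a minimal sketch in Python with pandas, assuming a single CSV with one tweet per row; the file name, chunk size, and output naming pattern are placeholders to adjust to your export:

    import pandas as pd

    INFILE = "california_earthquake_tweets.csv"  # placeholder file name
    CHUNK_ROWS = 25_000  # ~10 sections for ~250,000 tweets

    # Stream the large CSV in chunks so the full tweet set never has to
    # sit in memory, writing each chunk out as its own smaller CSV.
    for i, chunk in enumerate(pd.read_csv(INFILE, chunksize=CHUNK_ROWS), start=1):
        chunk.to_csv(f"section_{i:02d}.csv", index=False)

Each section_NN.csv can then be imported as a separate source, which matches the ten-section setup you already have, and the chunk size can be tuned down if NVivo still struggles.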

> On 23 May 2018, at 03:22, f hodgkins <frances.hodgkins at gmail.com> wrote:
> 
> All-
> I am working on a qualitative content analysis of a historical tweet set
> from CrisisNLP (Imran et al., 2016):
> http://crisisnlp.qcri.org/lrec2016/lrec2016.html
> I am using the California Earthquake dataset. The tweets have been stripped
> down to the day/time/tweet ID and the content of the tweet; the rest of
> the Twitter information is discarded.
> 
> I am using NVivo, which is known for its power for content analysis.
> 
> However, I am finding NVivo unwieldy for a dataset of this size (~250,000
> tweets). I wanted each unique tweet to function as its own case, but
> NVivo would crash every time. I have 18 GB of RAM and a RAID array.
> I do not have a server, although I could get one.
> 
> I am now working and coding side by side in Excel and in NVivo, with my
> data split into 10 large .csv files instead of individual cases, and this
> is working (but laborious).
> 
> QUESTION: Do you have any suggestions for software for large-scale content
> analysis of tweets? I do not need SNA capabilities.
> 
> Thank you very much,
> Fran Hodgkins
> Doctoral Candidate (currently suffering through Chapter 4)
> Grand Canyon University
> USA



