[Air-l] Re: new media and shock
Elizabeth Van Couvering
e.j.van-couvering at lse.ac.uk
Mon Jul 11 06:30:23 PDT 2005
Hi Stefania,
I am already impressed - 200 websites is a phenomenal amount of
content. We have been struggling with this problem in our research
community as well, and we would be interested to know what you turn up.
So far, the difficult issues we have discussed have been:
1) Freezing/archiving the state of the websites, or more generally,
how one deals with change. This can range from a very difficult
problem, if the website you are studying is database-driven and/or
constantly updated (like a news website), to the simpler problem of
how you download and reference what exists. Participants on this list
have raised questions about IPR (i.e. the legality of copying), but I
think the argument has been forcefully made that this can be
considered fair use. If, on the other hand, your website contains the
archived postings of members of the public and you want to analyse
their content, you are getting into a whole different ethical kettle
of fish.
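
(By way of illustration, here is a rough sketch of the freezing step
in present-day Python, standard library only. The output directory and
file-naming scheme are just illustrative assumptions; the point is to
keep a timestamped copy so that the version you analyse is the version
you cite.)

    import urllib.request
    from datetime import datetime, timezone
    from pathlib import Path

    def snapshot(url, out_dir="snapshots"):
        # Fetch the page bytes and save them under a timestamped name,
        # so each analysed version stays on record.
        html = urllib.request.urlopen(url).read()
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        safe = url.replace("://", "_").replace("/", "_")
        path = Path(out_dir) / (safe + "_" + stamp + ".html")
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(html)
        return path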
2) Documenting the structure of the website (i.e. its links) as part
of its content. How should this be done? Then, in terms of analysis,
there is the question of understanding the place of each web page
relative to the whole site. For example, on most sites the home page
gets by far the lion's share of the hits. Is it therefore appropriate
to analyse the home page in depth and subsidiary pages in less depth?
I guess only your theoretical framework can tell ;-)
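
(Again only a sketch: one crude way to document the link structure of
pages you have already mirrored, using just the Python standard
library. The file path in the usage line is hypothetical.)

    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        # Collect href targets from <a> tags in one saved page.
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def links_in(path):
        parser = LinkCollector()
        parser.feed(open(path, encoding="utf-8", errors="ignore").read())
        return parser.links

    # e.g. links_in("snapshots/index.html")  -- hypothetical file name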
3) Understanding the visual design of the website with reference to
its content.
4) Understanding the place of the website with regard to the rest of
the web, e.g. with link analysis or search term analysis...
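
(And for the between-sites half of that, a sketch that reuses the
collected links to count which sites in your sample point at which
others. The input format here, a dict of links per site, is my own
assumption, not a fixed convention.)

    from urllib.parse import urlparse
    from collections import defaultdict

    def cross_site_counts(pages_by_site, known_domains):
        # pages_by_site: {source domain: [href strings found on that site]}
        # known_domains: the set of domains in your sample.
        # Returns {source: {target: count}}, restricted to the sample;
        # relative links have no netloc and are skipped.
        counts = defaultdict(lambda: defaultdict(int))
        for source, hrefs in pages_by_site.items():
            for href in hrefs:
                target = urlparse(href).netloc.lower()
                if target and target != source and target in known_domains:
                    counts[source][target] += 1
        return counts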
I think that if you just wanted to scrape the text from each page and
then run it through, say, Atlas.ti to generate word lists, or to do a
more traditional content analysis of the kind applied to media texts,
that could be done. But I do think some of the other issues above
might arise anyway.
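
(A sketch of that last route, stripping tags and building a word list
before the text goes to a package like Atlas.ti; the tag-stripping is
deliberately crude and will mangle unusual markup.)

    import re
    from collections import Counter

    def visible_text(html):
        # Drop script/style blocks, then all remaining tags,
        # then collapse the whitespace that is left behind.
        html = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)
        text = re.sub(r"(?s)<[^>]+>", " ", html)
        return re.sub(r"\s+", " ", text).strip()

    def word_list(text, top=50):
        # Lower-case word frequencies, most common first.
        words = re.findall(r"[a-z']+", text.lower())
        return Counter(words).most_common(top)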
Elizabeth
On 11 Jul 2005, at 13:03, <s.vicari at reading.ac.uk> wrote:
> Hi,
>
> I am working on protest group online communication (thanks to the
> endless list of suggestions on this mailing list, I started using
> HTTrack!) and I am content analysing some 200 websites.
>
> So far, I have found quite a fragmented literature on content
> analysis of the Web. Would you flag up any specific works?
>
> Thanks!
> stefania
>
> Stefania Vicari
> PhD student in Sociology
> University of Reading
> PO Box 218,
> Reading, RG6 6AA,
> United Kingdom.
>
>
>
Elizabeth Van Couvering
PhD Student
Department of Media & Communications
London School of Economics and Political Science
http://personal.lse.ac.uk/vancouve/
e.j.van-couvering at lse.ac.uk