[Air-l] ethnography and ethics

Thomas Koenig T.Koenig at lboro.ac.uk
Sun May 9 17:57:39 PDT 2004


Dear Charles Ess,

I am surprised that you and most other vocal members of this list I just 
joined are so categorically opposed to Eero's ideas, most of which I happen 
to agree with. As I am a new list member, I might reiterate some points 
others may have made in the past, so I would like to apologize in advance 
if I raise some well-known issues for the remainder of the group:

At 13:43 05/05/2004, you wrote:
>1.  The primary reason for worrying about ethics in research (and anywhere
>else) is to avoid harming people and violating their rights - including
>rights to privacy, confidentiality, anonymity, and informed consent (insofar
>as one follows medical and social science models: humanities disciplines
>have different approaches - but the basic framework of attending to persons'
>rights and avoiding harm still applies).

All these sound like very reasonable and important values and they 
certainly are -- in private settings.

I would like to repeat my assertion that most internet traffic (email and 
ftp excluded, and IRC with some reservations) is public. People who 
publish on the usenet or the web are participating in public discourse, as 
do people on this listserv, which does not have a moderator. That is, they 
implicitly waive their right to privacy when they publish through these 
channels. If we were to treat them as if they required special 
protection, because they might not be aware that they are potentially 
publishing to an audience of a billion or so, we would violate other 
values, such as autonomy and responsibility for one's communicative 
actions. If people are below a certain age or mentally ill, their 
protection is, of course, paramount, but most internet participants are 
not. Likewise, if privacy is falsely assumed in typically very private 
settings (a listserv of victims of rape, e.g.), different rules apply. But 
to treat people on the net generally as if they required protection 
could also be construed as denying them the right to be considered full 
citizens/netizens.

What is more, to "reveal" one's identity might in some cases be more 
harmful than to lurk or even to deceive. If you delurk, you are bound to 
alter your data, which in turn might lead to underestimation of certain 
threats on the net. In some cases, for instance when you observe the 
initiation of a crime, even deception might be ethically justified.

For the moment, I would like to set aside the question of who or what is 
to decide whether a specific research technique is "harmful" to the people 
observed. I will return to this issue below.


>2.  The danger with "obsession with results" is that it runs the risk of
>turning the human beings one is studying into means to one's own ends -
>i.e., for the sake of results, researchers may be tempted to ignore the fact
>that these are people they are interacting with, and whose rights they risk
>trampling.
>Stated still another way, if our only ethical guideline is  "the end
>(results) justifies the means" - we can justify everything from violating
>basic research ethics (more on this in a bit) to such things as medical
>experiments that intentionally cause harm to human beings (whether they are
>the African-American subjects of the Tuskeegee Institute study of the 1940s
>- or prisoners in Nazi and Japanese concentration camps).

I think hardly anybody would disagree with you on this point, but there are 
several crucial differences between those studies and ethnography on the web.

The first issue is power: Nazi scientists had physical power over their 
victims. The Tuskegee study did not use coercion, AFAIK, but here, too, 
those abused in the study were recruited from the most disadvantaged strata 
of the population. In contrast, netizens typically come from the most 
advantaged strata and, of course, are not coerced into participating on the 
net. In fact, their participation is almost entirely independent of any 
ethnographic research on the net; they do not get "recruited" into a study 
(unless it is a web survey).

The second issue is consequences: It seems to me a stretch to compare 
bodily harm with some infringement of a falsely assumed privacy. Again, 
special cases such as a listserv for rape victims aside, by and large I 
cannot see potential *major* harm. Indeed, I can see as much potential 
harm as potential good in making semi-privacy fully public.

The third issue is legal status: Undoubtedly, Nazi science would violate 
the Universal Declaration of Human Rights and various national and 
international laws. My guess would be that the Tuskegee experiments would 
today be illegal in most countries that are not dictatorships. Thus, 
there is no need for the research community to outlaw such practices, as 
they have already been outlawed by more democratically legitimated bodies.

A fourth, somewhat related, issue is responsibility: Without absolving 
researchers of their responsibilities in these studies, I do think that 
both the concentration camp experiments and the Tuskegee study first and 
foremost reflect the illegitimacy of the German state from 1933 to 1945 
and flaws in the US-American system, respectively. Ethical issues so grave 
that they touch matters of life and death cannot and should not be 
regulated at the level of the academy, but at the level of the state and, 
in the distant future, of global humanity.

Finally, there is indeed the issue of results, read: the anticipated 
effects of these studies. The expected effects of the studies you mention 
are potential life-or-death questions. Lurking on the internet rarely 
raises such questions; it is far more likely that the only "harm" you 
will do is to annoy some people.

>The _consequences_ of this are bad for researchers:  many chatrooms are
>basically now posted as "off-limits" to researchers.  And in a forthcoming
>study, the authors show a rather direct proportion between the size of a
>chatroom and its hospitality (better: lack thereof) to researchers as
>announced and unannounced.   The news here is not good for researchers
>thinking about lurking in smaller chatrooms, where the behaviors under study
>might be more interesting than in larger chatrooms: not surprisingly, the
>smaller the chatroom, the more people (rightly or wrongly) expect privacy
>and respect for privacy - and the angrier they get when they discover a
>researcher has been lurking among them.
>The point is that even for a pure consequentialist, violating basic rights
>to and expectations of privacy may be profoundly damaging to the possibility
>of future research.

This seems to me a vicious circle: You are arguing for a code of conduct 
that is supposed to enable research, yet that very code effectively 
renders the research impossible.

I am also unsure what the legal status of chat room rules declaring 
research "off-limits" is. As long as chats are publicly accessible, I am 
pretty sure that they can be legally observed in almost all countries. So 
even if some chat rooms post rules that "prohibit" researchers, subsequent 
researchers do not have to adhere to those rules. It may be impolite to 
break them, but rudeness has not been outlawed yet, for very good reasons.

>3.  (Next to finally), you ask a famous question:
> > And, of course, who should judge the ethics of another anyhow?
>The short answer to this question, of course, is: we are.  Like it or not,
>whether always right or wrong, human communities attempt to establish
>ethical standards and judge human behaviors by those standards.

This is the most interesting point. You are absolutely correct that 
human communities require a code of conduct. The question, in my view, is 
who *we* are. I am very uneasy about academics writing their own rules of 
*ethics*; all too often, that has backfired. To return to the Nazi example: 
Many German academics gladly dispensed with their Jewish colleagues; it 
seemed to them the ethical thing to do (see, e.g., Mark Lilla's beautifully 
scathing criticism of Heidegger[1]). Your Tuskegee example is another case 
in point: It obviously did not run against the scientific ethics of its 
time. Why should contemporary academics be better prepared for potential 
ethical pitfalls?

If we give up on the idea that academics are experts in applied ethics, 
we still face the problem that ethics decided upon by academics are not 
democratically legitimated. Ulrich Beck has made that point with respect to 
the natural sciences: He criticised that many risk assessments are 
effectively made by scientists, who suggest, e.g., emission limits on the 
basis of some statistical calculations.[2] The *ethical* choice on the 
question of whether one should be able to research communications on 
the web covertly should not be left solely to the scientific 
community. And in fact, it hasn't been: the AoIR guidelines point to a 
myriad of legal resources.

Don't get me wrong, I do think that the research community should make 
independent decisions about what is *science* and what is not. I am much 
less sure about decisions about what constitutes appropriate *ethics* in 
science, once this goes beyond professional courtesy and civil behavior.


4.  Finally:
> > is wanting to immerse oneself in
> > research as an active participant with the same "no rules" approach as
> > the other participants unethical and is it unacceptable to the broad
> > body of researchers?
> > Am I going to be lonely in my School of Unethical Research - members,
> > 1   :-)
>No offense intended - but for both strong deontological and consequentialist
>reasons - I would hope so.  Not because I wish you harm - but because I
>think human beings must be treated with respect, and I don't want to see
>future research jeopardized by current researchers behaving in ways that
>would (rightly) lead to anger and outrage.

I am unsure whether the guidelines Eero was proposing, namely to adhere to 
ethical guidelines that are derived from universalist humanism and the laws 
of democratic states, show any disrespect for human beings.

I am also unsure that the elicitation of "anger and outrage" should be of 
paramount concern for social researchers. In fact, "anger and outrage" are 
part and parcel of much of the best social research. If you were to 
investigate Fascists talking on the net, they certainly would not gladly 
invite you into their group. If you do a covert investigation, they surely 
will be angry and outraged when you publish your study.

Respect for human beings in my view means also to treat them as autonomous 
individuals responsible for their actions.

In general, there frequently seems to be very little difference in the 
data on online communities whether or not one announces oneself as a 
researcher, as Mark has already mentioned. I am unsure whether the reason 
for these small effects is not merely the novelty effect of the net. I 
personally often find it useful to "reveal" my status (which, as I am 
currently looking at antisemitism on the net, has probably led to more 
"outrage" against my profession than if I had not :-)), but I cannot see 
any reason why covert/lurking strategies should not be legitimate research 
strategies in a number of cases that do not involve particularly vulnerable 
parts of the population and/or issues where privacy would be of paramount 
importance.

Thomas


[1] Lilla, Mark: The Reckless Mind. New York, NY: New York Review of 
Books, 2001.
[2] Beck, Ulrich: Risikogesellschaft [Risk Society]. Frankfurt a/M: 
Suhrkamp, 1986.

-- 
thomas koenig
department of social sciences, loughborough university
http://www.lboro.ac.uk/research/mmethods/staff/thomas/index.html 
