[Air-l] RE: FW: [chineseinternetresearch] National Science Foundation to help CIA spy on IRC chatrooms

Charles Ess ess at uni-trier.de
Mon Dec 6 06:50:31 PST 2004


> I am curious what the major issue of concern is here.
Good question, Allan - insofar as my previous posting in response to
Radhika's comment may be insufficient, and at the risk of some repetition...
> 
> Is it that the CIA is involved clandestinely?  While I agree that is
> somewhat troublesome, the NSF has many portfolios in which the security
> community might (and probably does) take an active interest.  Are we
> surprised that there is some sort of "reverse technology transfer"?
> Should we be upset by this?
Whether or not "we" should be upset depends very much, of course, on your
ethical and legal sensibilities.
As I suggested in the previous note - from both ethical and legal
perspectives in Europe, such utilization of researchers would be greeted, I
think, far less charitably.   Generally speaking (always dangerous, but it's
a necessary starting point) - there is both ethically and legally a much
stronger presumption of privacy online (whether it is justified or not from
a technical perspective) and a much greater expectation that the state and
its laws exist to protect those rights.
Again, this is not to say that European states do not spy on their own
citizens in the name of fighting terrorism.  It is to say that were they to
do so - especially by way of ostensibly secret funding of ostensibly
scholarly research projects - I'm rather sure that the outcry would be
considerably greater than it appears to have been in the States.

This in turn is only to say - there are, in my mind, important ethical and
legal issues at stake here, ones that I think would be of considerable
interest to members of an organization devoted to Internet research - one
that has, to its credit, created the first international and
interdisciplinary set of ethical guidelines for Internet research, and which
seeks to foster genuinely global perspectives on these issues.
> 
> Alternatively, we could be upset at the monitoring of chat rooms.  I have
> little experience in this sort of research, but Camtepe, Krishnamoorthy
> and Yener[1] describe an interesting system that finds clusters of social
> activity.  But this occurs on an *open* system.  There are
> countless ways to use Privacy Enhancing Technologies for group
> communication.  IRC is not only unencrypted, but the protocol was
> designed to enable lurking.  It's not even covert lurking though: the
> authors make no attempt to modify their IRC bot so that channel users
> cannot see the surveilling bot.
Right - but as I noted to Radhika, from at least some ethical approaches
such as deontology, we would pay some attention to the expectations of
people, whether or not those expectations are fully justified.
More fully: as was discussed in the AoIR guidelines, drawing on the example
provided by Dag Elgesem: in Norway, people in public spaces do _not_ expect
to have audio or video-recordings made of them without their explicit
permission.  In the U.S., no one expects such protection of their privacy /
image / conversation in a public space.  And an American citizen might
argue, based on his/her cultural experience, that no such privacy protection
is justified by one's free participation in an open space.  But the point is
that this expectation is not just a matter of technical structures, but also
the cultural values and practices within those structures.
Again, part of my interest in posting the note was to see if it would
trigger discussion of _researchers_' ethical obligations to respect and
protect privacy, whether or not participants in chatrooms might expect such
privacy.

This also points to another, almost funny issue: yes, of course, we all know
there is no privacy in chatrooms, etc.  So how dumb are terrorists going to
be to try to use them - even with encryption - to discuss their next big
strike against the U.S.?
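
(On the technical side of that point: "no privacy in chatrooms" is quite
literal for IRC.  Traffic travels as plain text, one line per message, so
any passive observer - on the wire or sitting silently in the channel - can
recover speaker, channel, and text with a few lines of code.  A minimal
sketch in Python; the nick and channel names are invented:

```python
import re

# IRC messages are clear text, one line each, e.g.
#   :alice!user@host PRIVMSG #chan :hello world
# A passive observer needs only a regex to recover who said what, where.
PRIVMSG = re.compile(
    r"^:(?P<nick>[^!]+)\S* PRIVMSG (?P<target>\S+) :(?P<text>.*)$"
)

def parse_privmsg(line):
    """Return (nick, channel, text) for a PRIVMSG line, else None."""
    m = PRIVMSG.match(line.strip())
    if m is None:
        return None
    return m.group("nick"), m.group("target"), m.group("text")
```

Nothing in the protocol alerts channel members that such a parser - or a
logging bot built on it - is listening.)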

> 
> What is the privacy zone we should expect in open communication forums?
> If there is information to be gained by monitoring this sort of thing, why
> should any interested actor not exploit it?  Companies like Intelliseek
> are already moving to capture and quantify consumer buzz.  Whether they
> can successfully turn data into useful information is an interesting
> research and business strategy problem, but I am not sure that it is cause
> for concern.
Exactly the question and nicely put.  But again, this is in part a
culturally-variable matter.  While Americans and, so far as I have been able
to gather, Asians as a group tend to see little problem in developing
"useful" information from open communication forums ("useful" requires some
definition, however - and in any case, points to the utilitarian preferences
of mainstream U.S. culture) - Europeans and Scandinavians seem far more
interested in protecting personal information, whether or not it is floated
across an open system.
Beyond the interesting ethical questions these differences open up from an
intercultural perspective, these differences further create sometimes
intractable problems for researchers - first of all, E.U.-based researchers
who have collected personal information as part of their research are prima
facie forbidden by the E.U. privacy protection laws from transferring their data
to a third country whose privacy protection laws are less stringent and thus
would jeopardize the privacy and confidentiality of persons participating in
a research project.  In the first instance, that means that E.U. scholars
cannot collaborate with U.S. scholars, for example - and certainly not
scholars in Asia, where data privacy protection laws are still very young
and limited, if they exist at all.
So even if a U.S.-based researcher may legitimately argue that there is no
ethical problem with "using" data in some way from an open system - s/he may
find it difficult to collaborate with European colleagues unless / until
these issues of data privacy protection are cleared up.

> 
> Using info gleaned from chat rooms for warrants, or intentionally
> mapping from an online identity to a legal identity--these seem like Bad
> Ideas.  But is there a strong case for protecting an open system from
> passive surveillance?
In my view, there are several reasons to protect open systems - whether
online or offline - from surveillance. To summarize:
1. The rationale _for_ passive surveillance of an open system seems
extraordinarily weak: if we're after terrorists, do we really expect they'll
discuss their plans on an open system, and in ways that will be detectable
by algorithmic methods of data coding and categorizing?
I'm sure I'm missing something here, but there seems to be little
compelling reason _for_ passive surveillance of an open system.  It further seems
to me that the burden of proof _for_ surveillance of citizens should be
extraordinarily high, not extraordinarily low.
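To make the weakness concrete, here is the crudest form such algorithmic
detection could take - keyword flagging over chat lines.  It keys on surface
strings, not intent, so it is trivially evaded by code words and prone to
false positives.  The watchlist and messages below are invented for
illustration:

```python
# Naive keyword flagging: flag any message containing a watchlist term.
# Easy to evade (synonyms, code words) and quick to flag the innocent.
WATCHLIST = {"attack", "bomb"}

def flag(messages):
    """Return the messages whose words intersect the watchlist."""
    return [m for m in messages
            if WATCHLIST & set(m.lower().split())]

chat = [
    "that film was the bomb",         # false positive
    "let's discuss the plan offline", # real signal, never flagged
]
```

Here flag(chat) returns only the harmless slang line and misses the
evasive one - the inversion of what the surveillance is supposed to
accomplish.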

There are, moreover, strong reasons _against_ such surveillance.
2. Especially a deontological approach to research ethics would take into
account people's expectations, and seek to respect those as far as possible,
beginning precisely with personal data/information.  Again, these
expectations are largely reflected in E.U. and Scandinavian research codes
and laws - i.e., the fact that they are weaker in the U.S. does not
automatically mean that the U.S. position is the "right" one; it may be
mistaken in important ways (as I believe it is).
3.  These contrasts lead to problems for U.S. based researchers who might
utilize data from such surveillance, at least if they want to collaborate
with colleagues outside their borders.
4.  From the standpoint of classic arguments _for_ privacy - several
philosophers in the modern period argue that privacy is an instrumental
good, i.e., valuable because it is needed to develop a sense of self,
intimate relationships, and, politically, for the sake of participating in a
democratic society.  Surveillance, by contrast - even in open
environments - is generally thought to have a chilling effect on dissent,
dialogue, etc.  (You don't have to read postmodernists on this point, but it
helps...)
So especially if we're interested in furthering the use of communication
systems for democratic societies, we might strongly object to passive
surveillance of even open systems.
(Deborah Johnson has a classic article in the excellent anthology, "Readings
in Cyberethics" edited by Spinello and Tavani, which summarizes these points
nicely.)

The last point should be read to say:  given the central importance of
protecting privacy, etc., for the sake of individual development,
relationships, and democratic polity, then _if_ we want to introduce
something like passive surveillance of open systems - the burden of proof
for doing so should be extraordinarily high.
Again, I don't see that the burden of proof has been met.

But in any event, I hope this provides a positive and helpful response to
what I take to be exactly the right questions.

In all events, cheers,

Charles Ess
Fall '04: Fulbright Senior Scholar
Universität Trier 
Fachbereich II
Fakultäten der Medienwissenschaft, Sinologie
Universitätsring 15
54296 Trier (Germany)
Office phone: (49) (0)651-201-3744
 Sekretariat: (49) (0)651-201-3203
         Fax: (49) (0)651-201-3741

Distinguished Research Professor, Interdisciplinary Studies
Drury University
900 N. Benton Ave.              Voice: 417-873-7230
Springfield, MO  65802  USA       FAX: 417-873-7435

Home page:  http://www.drury.edu/ess/ess.html
Co-chair, CATaC: http://www.it.murdoch.edu.au/catac/

Exemplary persons seek harmony, not sameness. -- Analects 13.23





