[Air-l] follow-up on trolling post
Susan Herring
susan at ling.uta.edu
Sat Mar 23 10:10:55 PST 2002
FYI, the paper on trolling that I mentioned last week is now
available on the web as a downloadable postscript file from:
http://ella.slis.indiana.edu/~herring/trolling.ps
It is currently (as of March 2002) under consideration for
publication in The Information Society.
Please do not direct any further individual requests for the
paper to me, unless you have a problem with the URL. I'm happy
to get email from you with reactions to the paper, of course.
>A related term is "flame bait", which Netlingo.com defines as:
>An intentionally inflammatory posting in a newsgroup or discussion
>group designed to elicit a strong reaction thereby creating a flame war.
In our paper, my co-authors and I distinguish between "trolling"
and "flame bait" -- although the two often produce similar
outcomes -- in that trolling involves an element of disingenuousness
or deception. Trolls present themselves as interested in
legitimate discussion, when in fact their motives are fundamentally
uncooperative. The flamer need make no such pretense.
>>ps on trolls: Guillaume Latzko-Toth, who I think is still on this list,
>>gave an interesting paper at the Lawrence Kansas AIR which (iirc) talked
>>about the rise of software-agent trolls inside IRC, much more
>>sophisticated than the USENET agent-trolls which I think still roam to and
>>fro. A really neat topic for research: what happens when human and
>>non-human trolls interact, I wonder? Do they?
>Yup. Kibo and Serdar Argic bumped metaphorical heads more than once, if I
>recall correctly. (Assuming you accept the definition of Kibo as a troll,
>which is arguable either way.) Plus, there are the people who would mention
>"turkey" in posts deliberately to trigger Serdar, which would be a human
>troll using a bot as its trolling agent.
Is any of this documented? I'd like to learn more about both sets of
observations. Thanks in advance to anyone who can provide specific
references.
In addition to the trolling research, I'm currently conducting research
on relevance violations (apparent non-sequiturs) in synchronous chat
interactions. My data include examples of humans interacting with bots,
where the "authenticity" of the bot's interaction (i.e., how
human-like it is) is assessed in terms of the relevance of its conversational
responses. Surprisingly, the bots' responses are often about as relevant
as those of the humans -- and this is not because the bots are
highly sophisticated. Rather, the norms of interaction in the (human)
chat environments (IRC, MUDs, MOOs) involve what I call "loosened
relevance" -- which is a polite way of saying that a lot of what
the participants say doesn't relate to what was said before. This
evidence suggests that humans and bots might well interact (in
certain online contexts, in certain ways) such that the former
don't realize they are interacting with the latter. It seems
plausible to me that trolling is one of those behaviors for which
bots could pass as human, since trolling involves relevance violations
(think of Lachlan's more "off the wall" posts as recent examples).
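For readers who want to experiment with this idea, one crude way to
operationalize "relevance" -- purely an illustrative sketch, and not the
measure used in my study -- is lexical overlap between a response and the
turn it follows, e.g. a Jaccard coefficient over content words. The example
turns below are hypothetical:

```python
import string

# Tiny illustrative stopword list (an assumption, not a standard resource).
_STOP = frozenset({"the", "a", "an", "and", "is", "i", "you", "it", "to", "of"})

def _content_words(turn):
    """Lowercase a turn, strip surrounding punctuation, drop stopwords."""
    words = (w.strip(string.punctuation) for w in turn.lower().split())
    return {w for w in words if w and w not in _STOP}

def jaccard_relevance(prev_turn, response):
    """Crude relevance score: Jaccard overlap of content words between
    a chat response and the preceding turn (0.0 = no shared content)."""
    prev, resp = _content_words(prev_turn), _content_words(response)
    if not prev or not resp:
        return 0.0
    return len(prev & resp) / len(prev | resp)

# An on-topic reply scores higher than an apparent non sequitur:
on_topic = jaccard_relevance("anyone read the new trolling paper?",
                             "yes, the trolling paper was interesting")
off_topic = jaccard_relevance("anyone read the new trolling paper?",
                              "my cat just knocked over a lamp")
```

Under "loosened relevance," the interesting point is that many perfectly
human turns would score as low as the non sequitur here -- which is exactly
why a bot's irrelevant responses need not give it away.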
Regards,
Susan
============================================
Susan C. Herring, Ph.D.
Associate Professor of Information Science
Adjunct Associate Professor of Linguistics
Fellow, Center for Social Informatics
Fellow, Center for Research on Learning and Technology
Indiana University, Bloomington
http://www.slis.indiana.edu/Faculty/herring/
============================================