[Air-L] Discussion of ChatGPT and other AIs and their uses
Justin Ho
zjustin334 at gmail.com
Sat Mar 7 07:19:04 PST 2026
Thanks a lot for bringing this up. I think this is indeed a very important,
but also very complex, question. I have recently run a survey asking about
the uses of, and concerns about, AI among communication scientists (but I
believe the insights are also relevant for other fields). Long story short,
most of us find using AI risky (for reasons already mentioned, among other
things), but most of us are using it, and at the same time most of us think
widely accepted norms on how to use it responsibly and ethically have yet
to emerge (at least at the time the survey was run). If you would like to
know more, my colleague has written a blog post, and we have uploaded the
current draft as a preprint (it is still under review, so please take it
with a grain of salt).
https://medium.com/@hannes.cools/five-takeaways-from-the-first-of-its-kind-international-survey-about-the-use-of-generative-ai-in-d7ce74caabfe?postPublishedType=repub
Happy to discuss more!
Best regards,
Justin
On Sat, 7 Mar 2026 at 22:31, Charles Melvin Ess via Air-L <
air-l at listserv.aoir.org> wrote:
> Thanks to Peter for bringing this up - obviously a critical discussion,
> one that has exploded in the literatures I'm still (somewhat) familiar
> with, especially over the last 5 years or so.
>
> I've responded to Peter privately, partly because I wanted to send him a
> copy of a paper that is about to be published. But some of that
> additional response may be of broader use or interest here.
>
> I and a close relative have been active users of ChatGPT over the past
> couple of years - primarily in the domains of scripting and elementary
> programming. At one point, for example, I was tinkering with C programs
> running under 2.11 BSD on a PiDP-11 emulator / replica I built as part
> of my exploring all of this. My relative comes to all of this with 40+
> years of professional experience, including decades as a systems
> administrator / security expert.
>
> The short story is that the results for us have been decidedly mixed.
> As a start: as Peter rightly notes, everything depends on asking the
> right question / asking the question properly. Failure to do so can
> lead down some long, very frustrating, and ultimately useless, if not
> destructive, rabbit holes. (Though the same can also often result from
> the right questions ...)
>
> It also becomes clear after a point what the underlying nature of these
> devices is - namely, sophisticated inference engines, totally devoid of
> any sort of semantic understanding, etc., that generate what I often
> call SWAGs - scientific (sorta) wild-ass guesses. (Less kindly, they
> are "stochastic parrots" that can only regurgitate the material they
> have been "trained" upon - so Bender et al., 2021.)
>
> It further becomes clear that we are often its guinea pigs / trainers:
> our responses to its suggestions - most especially when they go wrong,
> as they so often do - are then taken up as part of its training data
> and, hopefully, help improve it in some way or another for use further
> down the road.
>
> Last but not least - and no surprise: it is "trained" along many of the
> same sophisticated lines as the attention-keeping algorithms and related
> techniques familiar from SoMe (social media). The most obvious
> examples are the occasional choices offered between two different
> "personalities." What my relative aptly calls the butt-kissing algorithm
> is also on strong display, along with other simple tricks - "you're so
> close, now," etc.
> (In other words: follow the money ...)
>
> I continue to use it - as informed and guided by this sort of awareness
> of its strengths and limits: most pointedly, not as a replacement for my
> own thinking / muddling through complex projects - including research
> and writing - but as a very limited augmentation.
>
> This is, as those who know me might suspect, further grounded in a
> fairly extensive philosophical framework and (relational / feminist /
> critical posthumanist ...) anthropology, one rooted (as a start - only
> as a start) in the Socratic dialogues as well as Aristotle: specifically
> Plato's Phaedrus and the warnings there towards the end of the
> dialogue regarding the risks posed by the then-new media technology of
> writing.
> Contra a tendency in recent media and communication studies (from ca.
> 2012 forward, based on what I've managed to find) to interpret the story
> of Theuth and Ammon's warnings as a "media panic" - I rather show in a
> forthcoming paper via an ancient interpretive framework (ring
> composition / inclusio narratives) that this story serves first of all
> as a pedagogical device that helps bring home several of the lessons
> that unfold for the young Phaedrus across the course of the dialogue.
>
> As a start: that "bare letters" alone only provide us with a _semblance_
> of wisdom and knowledge, and thereby present the risk of what Shannon
> Vallor has aptly characterized as "deskilling," our failing to acquire
> the practices and abilities ("virtues") necessary for acquiring such
> knowledge and wisdom on our own - most centrally, the capacity of
> phronesis as a form of self-correcting, specifically ethical judgment.
> What the media panic reading tends to further miss is the subsequent
> discussion of "the gardens of letters" - the positive, proper uses of
> writing (including the central art of writing well vs. writing
> disgracefully which is thematic to the dialogue from the outset) as a
> complementary media technology requisite to the philosophical pursuit of
> knowledge and the capacity for good judgment as essential elements to
> good lives of flourishing.
>
> Such a complementary approach, last but not least, can be traced through
> early critiques of classic AI, including Joseph Weizenbaum and
> especially Hubert Dreyfus (1973) - who identifies Plato's Euthyphro as
> the beginning of the story of AI as it starts out with a request for a
> rule-based approach to ethics, which turns out to be futile: phronesis,
> as not reducible to rules, is required instead - including by AI
> programmers themselves, as Dreyfus further illustrates.
>
> In more technical terms: there arises here what came to be called the
> frame problem. The upshot is that such judgment is not computationally
> tractable - hence Weizenbaum's foundational distinction between judgment
> and calculation (1976).
>
> <Aside>: those familiar with the origins and development of the AoIR
> ethics guidelines for internet research may recognize that phronesis,
> contra rule-based approaches such as utilitarianism and deontology, has
> been central to our development and suggestions for use of these
> guidelines from the start.
> Another discussion entirely, but I have the very strong impression, from
> several sources and experiences over the past 2+ decades, that grounding
> our guidelines in the already solid and fruitful phronesis / judgments
> of both new and more experienced researchers as a primary beginning
> point, coupled with the aim of building and using the guidelines in such
> a way as to foster our own phronetic abilities - rather than tick-boxing
> or, ho ho, asking ChatGPT ... - has been a primary contributor to their
> extensive and successful use. </Aside>
>
> Still more recently, work by, e.g., Virginia Dignum (2019, 2021) and
> Katharina Zweig (2019) (a computer scientist and a bioinformaticist,
> respectively) shows similar limits to generative AI approaches -
> reiterating the point that AI / ML / LLMs offer only semblances of
> knowledge, and hence represent similar risks of deskilling should we
> turn more and more of our cognitive / writing / ethical, etc. loads over
> to the machines.
> All of this captures a central point from the Phaedrus - "know your
> tools," i.e., their strengths and limits, in order to then know how to
> best make appropriate use of them towards these larger ends.
>
> If any of this is of further interest - including for those of us who
> have discussed the Phaedrus over the years here - I'm happy to send
> along the paper detailing all of this (forthcoming in the Danish
> Yearbook of Philosophy: Open Source). I conclude there with some
> examples of such a complementary approach to using LLMs in contemporary
> research and teaching that might be helpful or suggestive in particular.
>
> Of course, critical comments and suggestions always welcome.
>
> Best of luck with it all and thanks again to Peter for raising these
> issues.
>
> - charles ess
> Professor Emeritus, University of Oslo
>
> On 07/03/2026 08:17, Peter Timusk via Air-L wrote:
> > Hello, I haven't really posted to this list for a decade or more, I
> > think. I work in statistical computing these days and rarely spend time
> > on my employer's (Statistics Canada) Internet Use surveys, though I did
> > write one short study in 2009.
> >
> > At work we do a lot of statistical computing and even analysts are
> > expected to be able to calculate statistics for their own papers. We are
> > moving off the statistical computing language SAS and going open source
> > with R and Python. We are also expected to start exploring AI for use in
> > making things more efficient and saving money, but also to do so with
> > high ethical standards and, of course, working within our culture of
> > citizen and business data privacy.
> >
> > I just wanted to write this email to see if there is any discussion
> > possible here via email.
> >
> > I have used AI for the past few months to do some volunteer work. Here
> > I am turning an online collection of PDFs into a searchable set of blog
> > posts on a WordPress site. I have learned that one can ask ChatGPT
> > long, search-like questions and it will produce code examples and
> > workflows - say, to show me how to use R and pdftools, an R package, to
> > extract text from these PDFs. I generally have to know what I am
> > looking for to ask the questions, but I am happy so far with the
> > results of this AI use. I also have to read the code and understand the
> > loops and logic going on, which allows my programming experience to
> > validate the results.
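> >
> > To give a flavour: a minimal sketch of that extraction step - here in
> > Python with the pypdf package rather than the R / pdftools code I
> > actually ran, and with placeholder folder names - looks something like
> > this:
> >
> > # Extract the text of every PDF in a folder into one .txt file per
> > # PDF, ready to be turned into WordPress posts.
> > from pathlib import Path
> > from pypdf import PdfReader  # pip install pypdf
> >
> > src = Path("pdfs")    # the downloaded PDF collection (placeholder name)
> > dst = Path("posts")   # one plain-text file per PDF
> > dst.mkdir(exist_ok=True)
> >
> > for pdf_path in sorted(src.glob("*.pdf")):
> >     reader = PdfReader(pdf_path)
> >     # Join the text of every page; pages with no extractable text
> >     # (e.g. scanned images) simply contribute empty strings.
> >     text = "\n".join(page.extract_text() or "" for page in reader.pages)
> >     (dst / pdf_path.with_suffix(".txt").name).write_text(text, encoding="utf-8")
> >
> > Reading a loop like that line by line - checking that every page and
> > every file really is visited - is exactly where the programming
> > experience comes in.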
> >
> > I am about to use Python code suggested by ChatGPT to make an offline
> > archive of all my Facebook posts from 2007 to now, and also to publish
> > these in some neat fashion to a WordPress blog.
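> >
> > A rough sketch of that first step - assuming the JSON flavour of
> > Facebook's "Download your information" export, whose file and field
> > names have changed over the years and need checking against the actual
> > download - might look like this:
> >
> > # Turn a Facebook JSON export into one dated text file per post,
> > # which can later be imported into a WordPress blog.
> > import json
> > from datetime import datetime, timezone
> > from pathlib import Path
> >
> > # File name and layout from one common export version - verify yours.
> > posts = json.loads(Path("posts/your_posts_1.json").read_text(encoding="utf-8"))
> >
> > out = Path("archive")
> > out.mkdir(exist_ok=True)
> >
> > for i, post in enumerate(posts):
> >     when = datetime.fromtimestamp(post.get("timestamp", 0), tz=timezone.utc)
> >     # Any post text sits inside a list of "data" entries.
> >     text = " ".join(d.get("post", "") for d in post.get("data", []))
> >     if text.strip():
> >         (out / f"{when:%Y-%m-%d}_{i:04d}.txt").write_text(text, encoding="utf-8")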
> >
> > Anyway, I am hoping to join the discussions on AI bias and ethics that
> > have been going on for a while.
>
> _______________________________________________
> The Air-L at listserv.aoir.org mailing list
> is provided by the Association of Internet Researchers http://aoir.org
> Subscribe, change options or unsubscribe at:
> http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org
>
> Join the Association of Internet Researchers:
> http://www.aoir.org/
>