[Air-L] Discussion of ChatGPT and other AIs and their uses
Wendy Norris
wnorris0 at naz.edu
Sat Mar 7 09:18:22 PST 2026
Hi Peter,
I lead an interdisciplinary ethical data science university program (Math,
Information Science, Philosophy and Sociology) and conduct research in
computational social science primarily in Python.
I want to pass along some serious cautions, currently being raised in the
Python and cybersecurity communities I am part of, about using generative
AI apps for programming and analytical work.
[1] There is growing anecdotal evidence that generated Python code outputs
carry malware and serious security vulnerabilities up and down the
code stack: poisoned source data, compromised libraries, deprecated code
with hidden security injections, agents that provide backdoor access, a lack
of standardized security protocols, and more. These risks hold irrespective
of the app source (GitHub, Claude, ChatGPT, etc.) and the model. Much of
the open source programming community is woefully underprepared and
under-resourced to address these threats.
There has been some discussion that newer gen AI models are actually more
vulnerable to external manipulation because their outputs weight newer source
material more heavily than the older pre-2023 knowledge base, precisely the
period in which malware injection began taking root as app popularity grew.
This post from Lawfare.com offers a pretty comprehensive explanation of the
variety of risks:
https://www.lawfaremedia.org/article/when-the-vibe-are-off--the-security-risks-of-ai-generated-code
[2] Recent reporting on the integration of generative AI models into U.S.
military targeting and domestic surveillance makes quite clear that
professional and consumer use of these apps advances model training that
supports activities I morally reject. This is a tricky ethical area, and
other folks may have different standards, but this is a red line for me.
This week, the U.S. Dept of Defense designated Anthropic a supply chain
security risk in an attempt to force the company to let the Pentagon use
Claude models as the Trump regime sees fit. Otherwise, the U.S.
government will force Anthropic's partners (Microsoft, Google, NVIDIA,
Palantir, etc.) to cancel their work with Anthropic or risk losing their own
lucrative Pentagon contracts. The loss of those contracts, and of access to
data centers, GPUs, and chips, would effectively destroy Anthropic.
https://www.theguardian.com/technology/2026/mar/07/anthropic-claude-ai-pentagon-us-military
Setting aside the poor-quality code outputs that introduce technical,
ethical, and cognitive debt, and the environmental impacts, I would be very,
very cautious about generating "vibe code" from scraped data of questionable
provenance with known security risks.
You may want to explore cybersecurity websites to determine your own risk
tolerance, as these conversations do not seem to have spread far beyond
those who study or write about cybersecurity and military technologies.
Forewarned is forearmed,
Wendy
wendy norris, ph.d.
Associate Professor, Social Computing
Department of Mathematics + Data Science
Founding Faculty, Ethical Data Science program
Founding Faculty, Institute for Technology, AI + Society
Interim Director, Technology, AI & Society program
Peckham Hall, Room 212
Nazareth University <https://www2.naz.edu> | Rochester, NY USA
UTC-4 | ET
Be safe. Be kind. Make good trouble.
> ---------- Forwarded message ----------
> From: Peter Timusk <peterotimusk at gmail.com>
> To: air <air-l-aoir.org at listserv.aoir.org>
> Cc:
> Bcc:
> Date: Sat, 7 Mar 2026 02:17:10 -0500
> Subject: [Air-L] Discussion of ChatGPT and other AIs and their uses
> Hello! I haven't really posted to this list for a decade or more, I think. I
> work in statistical computing these days and rarely spend time on my
> employer's (Statistics Canada) Internet Use surveys, though I did write one
> short study in 2009.
>
> At work we do a lot of statistical computing, and even analysts are
> expected to be able to calculate statistics for their own papers. We are
> moving off the statistical computing language SAS and going open source
> with R and Python. We are also expected to start exploring AI for use in
> making things more efficient and saving money, but also to do so with high
> ethical standards and, of course, working within our culture of citizen and
> business data privacy.
>
> I just wanted to write this email to see if there is any discussion
> possible here via email.
>
> I have used AI for the past few months to do some volunteer work. Here I am
> turning an online collection of PDFs into a searchable set of blog posts on
> a WordPress site. I have learned that one can ask long search questions of
> ChatGPT and it will produce code examples and workflows, say, to show me how
> to use R and pdftools, an R package, to extract text from these PDFs. I
> generally have to know what I am looking for to ask the questions, but I
> am happy so far with the results of this AI use. I also have to read the
> code and understand the loops and logic going on, which allows my
> programming experience to validate the results.
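The PDF-to-text step described above can be sketched in Python as well. This is a minimal example, assuming the third-party pypdf library; the workflow in the message used R's pdftools, so the function names and structure here are illustrative only:

```python
import re
from pathlib import Path

def clean_text(raw: str) -> str:
    """Collapse the ragged whitespace that PDF text extraction typically produces."""
    return re.sub(r"\s+", " ", raw).strip()

def pdf_to_text(pdf_path: str) -> str:
    """Extract the text of every page in a PDF as one string.

    pypdf is a third-party package (pip install pypdf); it is imported
    inside the function so clean_text() above remains usable even where
    pypdf is not installed.
    """
    from pypdf import PdfReader
    reader = PdfReader(pdf_path)
    # extract_text() can return None for image-only pages, hence the "or ''".
    return "\n\n".join(clean_text(page.extract_text() or "")
                       for page in reader.pages)
```

An R user would reach for pdftools::pdf_text() instead; the shape of the loop is the same either way, and, per the cautions above, the generated code should be read and validated rather than run blind.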
>
> I am about to use Python suggested by ChatGPT to make an offline archive of
> all my Facebook posts from 2007 to now and also publish these in some neat
> fashion to a WordPress blog.
>
> Anyway, I am hoping to join the discussions on AI bias and ethics that have
> been going on for a while.
>