[Air-L] Psychological implications of AI
xdxd.vs.xdxd at gmail.com
Fri Oct 6 10:03:00 PDT 2017
First of all, thank you to those who have already contributed: much appreciated. And thanks for the questions: indeed, they may help clarify things:
> 1) What type of AI are you studying? There is no universally agreed upon
> conceptualization of AI. As a result, there are many different types of AI.
> There is the pursuit of "artificial general intelligence," which is what is
> often represented, or should I say misrepresented, in science fiction. Then
> there are applications that have spun out of these efforts or are developed
> for very specific contexts called "narrow AI." We have "narrow AI" and
> have made advances toward AGI, but do not have AGI or what was originally
> conceptualized as AI early within the field's foundation. Or are you
> studying the "idea of AI"?
> 2) It seems like you may be interested in how individuals respond to and
> behave toward specific AI, is this correct? Or are you talking at an
> organizational level, a societal level?
First of all, it may be helpful to specify: this research is for a task
force of Italy's agency for the digital agenda, dedicated to
understanding how to prepare Italy and its public administration for the
emergence of AI. The definition is intentionally left vague because, as you
can imagine, there are more than a few people in Italy's public
administration (as in any administration) who would benefit from starting
from scratch, with the definitions.
The prevalent orientation sees the task force focused mainly on those
seminal versions of AI found in chatbots and similar software
artifacts, or in data processing techniques that provide some level of
decision support, for example through predictive analysis of some kind.
In a way, this also makes sense, as most of them are really focused on the
specifics of how you can use AI (any definition) to create better services
for citizens and decision support systems for policy makers and public
administrators.
Instead, we are trying to broaden the spectrum of this activity.
From our point of view, the wide availability of "artificial intelligence"
(any definition) in objects, services, places, processes of our daily lives
creates radical impacts on the ways in which we relate, learn, work,
communicate, inform and entertain ourselves, experience places, the
environment etc. These impacts and implications are ethical, economic,
organizational, and affect our freedoms and rights.
But first of all they are existential and psychological.
This is why, among the technological, economic, information, education,
ethics and other challenges, we have added a psychological one.
In this challenge we wish to address the mutation brought to the way in
which we can imagine, experience, relate to and act on the world when
artificial intelligence enters the scene, in any (all) of its possible
forms.
We think that this is important for multiple reasons.
One of them is that it seems like an important responsibility for a
government to support its citizens in socially constructing an imagination
for these (and other) important themes.
Reaching a social agreement in which it becomes clearer how human beings
wish to position themselves in this scenario, and with what meanings and
implications, seems to us a very important thing for a government to
support and sustain.
> If you can answer these questions, I know many people on this list can
> provide you more specific direction. And if you aren't sure of the answers
> to these questions, I have resources for that too.
This above is where this request comes from. There are lots of people who
are ready and skilled in suggesting all the things you can do with AI.
Far fewer are ready to embrace a discussion about what it could mean,
and what sort of perceptive, cognitive, emotional, experiential and
existential mutation this would bring to human beings.
McLuhan used to speak about the extension of the nervous system. De
Kerckhove used to speak about psychotechnologies and the "point of life".
Cyberneticists used to point out the mutual influence between technologies
and human beings: technologies make you just as much as you make them.
This is a vision in which technology is not neutral.
Yes, you may use a hammer to drive a nail, or you can use it to smash it on
someone's head. The truth is that when you hold a hammer in your hand,
everything starts to look like a potential nail.
This is the kind of investigation which we are looking for.
In the first phase there will be a review of existing research; in later
phases, experimentation may be employed.
> Best of luck,
> Andrea L. Guzman, Ph.D.
> Assistant Professor
> Dept. of Communication
> Northern Illinois University
> alguzman at niu.edu
> Date: Thu, 5 Oct 2017 15:17:33 +0200
> From: "xDxD.vs.xDxD" <xdxd.vs.xdxd at gmail.com>
> To: List Aoir <air-l at listserv.aoir.org>
> Subject: [Air-L] Psychological implications of AI
> Hello everyone,
> I am looking for suggestions about books and articles which specifically
> deal with the psychological implications of Artificial Intelligence.
> There are many articles, books and web resources that deal with what AI
> can do, what the dangers are, the ethical issues, etc.
> But only a few confront the psychological impact on human beings when AI
> enters the scene.
> Can anybody suggest some titles/resources?
> The Air-L at listserv.aoir.org mailing list
> is provided by the Association of Internet Researchers http://aoir.org
*[**MUTATION**]* *Art is Open Source *- http://www.artisopensource.net
*[**CITIES**]* *Human Ecosystems Relazioni* - http://he-r.i
*[**NEAR FUTURE DESIGN**]* *Nefula Ltd* - http://www.nefula.com
*[**RIGHTS**]* *Ubiquitous Commons *- http://www.ubiquitouscommons.org
Professor of Near Future and Transmedia Design at ISIA Design Florence: