[Air-L] Air-L Digest, Vol 162, Issue 14 Readings on internet 'addiction'? (Sanjay Sharma)
Katrin Tiidenberg
katrin.tiidenberg at gmail.com
Sun Jan 14 08:27:01 PST 2018
Dear Sanjay,
Annette Markham, a group of students, and I worked on how people make sense of their own experiences and which popular discourses they (re)produce or push back on. The addiction narrative is addressed centrally. Available here: https://dl.acm.org/citation.cfm?id=3097307 (happy to send if you can’t access).
best,
Kat Tiidenberg
Katrin Tiidenberg, PhD
Aarhus University / Tallinn University
kkatot.tumblr.com
> On Jan 13, 2018, at 4:17 AM, air-l-request at listserv.aoir.org wrote:
>
> Send Air-L mailing list submissions to
> air-l at listserv.aoir.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org
> or, via email, send a message with subject or body 'help' to
> air-l-request at listserv.aoir.org
>
> You can reach the person managing the list at
> air-l-owner at listserv.aoir.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Air-L digest..."
>
>
> Today's Topics:
>
> 1. Readings on internet 'addiction'? (Sanjay Sharma)
> 2. Any sociological or STS research on machine learning?
> (Yosem Companys)
> 3. Re: research ethics again - students and FB (Dan L. Burk)
> 4. Re: Any sociological or STS research on machine learning? (Sally)
> 5. Re: research ethics again - students and FB (Dan L. Burk)
> 6. Re: WEBCAST FRIDAY: Digital Preservation: Policy Challenges
> (with Vint Cerf) (Joly MacFie)
> 7. Re: Any sociological or STS research on machine learning?
> (Jenna Burrell)
> 8. Re: Any sociological or STS research on machine learning?
> (Roberge Jonathan)
> 9. Re: research ethics again - students and FB (Peter Timusk)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 12 Jan 2018 14:19:06 +0000
> From: Sanjay Sharma <Sanjay.Sharma at brunel.ac.uk>
> To: "air-l at listserv.aoir.org" <air-l at listserv.aoir.org>
> Subject: [Air-L] Readings on internet 'addiction'?
> Message-ID:
> <VI1PR0102MB28457444DF89A740DADC86E3C0170 at VI1PR0102MB2845.eurprd01.prod.exchangelabs.com>
>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello All
>
>
> I'm putting together a new UG social media module, and after eliciting feedback from students, it's clear there's interest in internet 'addiction'.
>
>
> In the past, I shied away from these kinds of discussions about 'addiction', as they can be empirically dubious, resting on simplistic - psychological, rather than sociological - notions of subjectivity and behaviour. But now that (controversially) the WHO may classify gaming disorder as a mental health condition, it struck me that the topic does need some serious attention. More generally, there are seemingly convincing claims about how the design of social media platforms and apps exploits our attention, e.g. https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia
>
>
> Are there critically informed academic resources/readings the hive-mind of AoIR would recommend on this topic? (I'm happy to compile and re-circulate to the mailing-list).
>
>
> Thanks
>
> Sanjay
>
>
> Dr Sanjay Sharma
> Department of Social & Political Sciences (MJRD156)
> College of Business, Arts & Social Sciences
> Brunel University
> http://www.brunel.ac.uk/people/sanjay-sharma
> darkmatter Journal editor: http://www.darkmatter101.org/
> twitter: @sanjay_digital
>
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 12 Jan 2018 08:29:08 -0800
> From: Yosem Companys <ycompanys at gmail.com>
> To: "Science & Technology Studies" <STS at nic.surfnet.nl>,
> STSGRAD at googlegroups.com, AIR <air-l at aoir.org>
> Cc: Stephen Paff <stephen.paff at gmail.com>
> Subject: [Air-L] Any sociological or STS research on machine learning?
> Message-ID:
> <CANhci9GiiFD3STAC7EvPBCaGt6a=etP4CUXb_ZUKpBU1h_OZ-g at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> From: Stephen Paff <stephen.paff at gmail.com>
>
> Hello everyone,
>
> I am conducting research into the anthropology of machine learning. Does
> anyone know of ethnographies of the development, implementation, and/or use
> of machine learning algorithms? Are there any sociologists, STS
> researchers, or scholars from other related fields studying machine
> learning whose work I should look into as well?
>
> Sincerely,
> Stephen Paff
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 12 Jan 2018 08:41:36 -0800
> From: "Dan L. Burk" <dburk at uci.edu>
> To: Sohail Dahdal <sohail.dahdal at gmail.com>
> Cc: "Christopher J. Richter" <crichter at hollins.edu>, "Charles M. Ess"
> <c.m.ess at media.uio.no>, air-l <air-l at listserv.aoir.org>
> Subject: Re: [Air-L] research ethics again - students and FB
> Message-ID: <b21bf7b536c5aaa36bac34ad3388a7a9 at uci.edu>
> Content-Type: text/plain; charset=US-ASCII
>
> Well, as I mentioned before, this is not really my circus. Most of what
> I know about research ethics comes from the biomedical area. My sense
> is that the principles are similar. In that context, it is perfectly
> fine to ask subjects to take risks as long as you have informed consent
> regarding the risks and benefits -- assuming that principle holds, the
> issue wouldn't be the risk; it seems to me it would be the subjects'
> knowledge and acceptance of the risk.
>
> There is in Charles' problem the wrinkle of the subjects being minors,
> which somewhat bifurcates the consent question -- on autonomy
> principles, there is the subjects' own acceptance, which is a function
> of actual age and capacity (i.e., a seventeen year old may be perfectly
> capable of understanding the implications of the experiment, a seven
> year old probably not). And then there is the separate question of
> consent by the minors' guardians or responsible party as agent.
>
> Cheers, DLB
>
> On 2018-01-11 21:45, Sohail Dahdal wrote:
>
>> Asking your research subjects to take ethical but illegal actions could be either 'ethical research' or not, depending on the risk you might expose your subjects to...
>>
>> In that sense highly ethical actions could actually be highly unethical research.
>>
>> In the case of FB fake accounts, you have to ask about the risk to your subjects, including the risk of forming bad habits, not just the legal implications.
>>
>> Prof Sohail Dahdal,
>> American University of Sharjah
>>
>> On 12 Jan 2018, at 3:30 am, Dan L. Burk <dburk at uci.edu> wrote:
>>
>> So we seem to agree on your second statement.
>>
>> Regarding the first: suppose that Charles designs a study that asks
>> minors (or really anyone) to engage in civil disobedience. Perhaps he
>> asks them to trespass, with a risk of arrest and an arrest record. For
>> good reasons, like saving baby seals or giving persons of color seats at
>> the lunch counter or something.
>>
>> Definitely illegal. But also highly ethical behavior on the part of the
>> study subjects.
>>
>> Is Charles behaving unethically in asking them to behave highly
>> ethically but illegally?
>>
>> Not really my rodeo, but I strongly suspect that the behavior is ethical
>> all the way down.
>>
>> Cheers, DLB
>>
>> On 2018-01-10 22:07, Christopher J. Richter wrote:
>>
>> Ah, but the question is not whether it is ethical for the minors to violate a (for them) non-binding agreement, but whether it is ethical for the presumably adult researcher to require it of them. And just because something is legal, that does not make it ethical.
>>
>> Christopher J. Richter, Ph.D.
>> Associate Professor, Communication Studies
>> Hollins University
>> Roanoke VA, USA
>>
>> On Jan 10, 2018, at 11:37 PM, Dan L. Burk <dburk at uci.edu> wrote:
>>
>> So, although I am not saying that the study design is ethical, or even necessarily a good idea, I would most definitely take issue with either the specific assertion that violating an adhesion contract is always unethical (it is called an adhesion contract for good reason), and with the more general assertion that violations of law are always unethical.
>>
>> Also, non-trivially, the assertion is a non-sequitur: minors generally can't enter into binding contracts, so there is by definition no contract for them to violate.
>>
>> None of that means you should go ahead and do it; only that if you decline to do so, it should be for some other reasons.
>>
>> Cheers, DLB
>>
>> Dan L. Burk
>> Chancellor's Professor of Law
>> University of California, Irvine
>> ++++++++++++++++++++++++++++++++
>> 2017-18 Fulbright Cybersecurity Scholar
>>
>> On 2018-01-10 09:28, Christopher J. Richter wrote:
>> Dear Charles,
>>
>> TOS agreements are most often legally binding. Requiring minors (indeed any study participant, but especially minors) to violate a legal contract, whether online or off, is unethical on the face of it.
>>
>> Then there is the issue of deception, of whom and how interactions on the fake accounts are deceiving. Deception, by definition, undermines informed consent. Will those who are deceived be debriefed? If not, it's problematic.
>>
>> Christopher J. Richter, Ph.D.
>> Associate Professor, Communication Studies
>> Hollins University
>> Roanoke VA, USA
>>
>> On Jan 10, 2018, at 4:44 PM, Charles M. Ess <c.m.ess at media.uio.no> wrote:
>>
>> Dear AoIRists,
>>
>> What are your thoughts regarding the following?
>>
>> A research project involves a small number of students, legally minors - and requires that they set up fake FB accounts for the sake of role-playing in an educational context?
>> Of course, fake accounts are a clear violation of the FB ToS.
>>
>> I know we've discussed the ethics of researchers doing this (with mixed results, i.e., some for, some concerned).
>>
>> But I'm curious what folk think / feel about this version of the problem.
>>
>> Many thanks in advance,
>> - charles
>> --
>> Professor in Media Studies
>> Department of Media and Communication
>> University of Oslo
>> <http://www.hf.uio.no/imk/english/people/aca/charlees/index.html>
>>
>> Postboks 1093
>> Blindern 0317
>> Oslo, Norway
>> c.m.ess at media.uio.no
>> _______________________________________________
>> The Air-L at listserv.aoir.org mailing list
>> is provided by the Association of Internet Researchers http://aoir.org [1]
>> Subscribe, change options or unsubscribe at: http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org [2]
>>
>> Join the Association of Internet Researchers:
>> http://www.aoir.org/
>> --
>>
>> --
>> Dan L. Burk
>> Chancellor's Professor of Law
>> University of California, Irvine
>> ++++++++++++++++++++++++++++++++
>> 2017-18 Fulbright Cybersecurity Scholar
>>
>> Links:
>> ------
>> [1] http://aoir.org
>> [2] http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org
>
> --
> Dan L. Burk
> Chancellor's Professor of Law
> University of California, Irvine
> ++++++++++++++++++++++++++++++++
> 2017-18 Fulbright Cybersecurity Scholar
>
>
> Links:
> ------
> [1] http://aoir.org
> [2] http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org
>
> ------------------------------
>
> Message: 4
> Date: Fri, 12 Jan 2018 08:42:25 -0800
> From: Sally <sally at sally.com>
> To: Yosem Companys <ycompanys at gmail.com>
> Cc: Science & Technology Studies <STS at nic.surfnet.nl>,
> STSGRAD at googlegroups.com, AIR <air-l at aoir.org>, Stephen Paff
> <stephen.paff at gmail.com>
> Subject: Re: [Air-L] Any sociological or STS research on machine
> learning?
> Message-ID: <862F3EB4-BF35-4EB2-8F77-052E0F955355 at sally.com>
> Content-Type: text/plain; charset=us-ascii
>
> Hi Stephen,
>
> We look at automation in general from an anthropological perspective. This includes AI, ML, and applied algorithms.
>
> Pubs at
>
> http://www.posr.org/wiki/publications
>
> Sally
>
> Sally Applin, Ph.D.
> ..........
> Research Fellow
> University of Kent, Canterbury, UK
> School of Anthropology and Conservation
> Centre for Social Anthropology and Computing
> ..........
> Research Associate
> Human Relations Area Files (HRAF)
> Yale University
> ..........
> Associate Editor, IEEE Consumer Electronics Magazine
> Member, IoT Council
> Executive Board Member: The Edward H. and Rosamond B. Spicer Foundation
> ..........
> http://www.posr.org
> http://www.sally.com
> I am based in Silicon Valley
>
>> On Jan 12, 2018, at 8:29 AM, Yosem Companys <ycompanys at gmail.com> wrote:
>>
>> From: Stephen Paff <stephen.paff at gmail.com>
>>
>> Hello everyone,
>>
>> I am conducting research into the anthropology of machine learning. Does
>> anyone know of ethnographies of the development, implementation, and/or use
>> of machine learning algorithms? Are there any sociologists, STS
>> researchers, or scholars from other related fields studying machine
>> learning whose work I should look into as well?
>>
>> Sincerely,
>> Stephen Paff
>> _______________________________________________
>> The Air-L at listserv.aoir.org mailing list
>> is provided by the Association of Internet Researchers http://aoir.org
>> Subscribe, change options or unsubscribe at: http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org
>>
>> Join the Association of Internet Researchers:
>> http://www.aoir.org/
>>
>
>
> ------------------------------
>
> Message: 5
> Date: Fri, 12 Jan 2018 08:48:35 -0800
> From: "Dan L. Burk" <dburk at uci.edu>
> To: "Christopher J. Richter" <crichter at hollins.edu>
> Cc: Sohail Dahdal <sohail.dahdal at gmail.com>, "Charles M. Ess"
> <c.m.ess at media.uio.no>, air-l <air-l at listserv.aoir.org>
> Subject: Re: [Air-L] research ethics again - students and FB
> Message-ID: <568d877ff6f9097e48292fbd067f35bc at uci.edu>
> Content-Type: text/plain; charset=US-ASCII
>
> See my comment to Sohail, immediately preceding. I worry again about
> conflating the legal status of the minors with the ethics of their
> participation.
>
> (And I assume the point of creating pseudonymous Facebook accounts is so
> that they *won't* be denied social media access in the future!)
>
> Cheers, DLB
>
> On 2018-01-11 23:57, Christopher J. Richter wrote:
>
>> As Sohail Dahdal clarifies, the ethics of human subject research turn not on whether the actions subjects are directed to take are laudable, but on whether the research involves risk of harm to the participants - in this case, minors.
>>
>> Risks for the participants of the study as proposed might also include that of being denied social media service in future, which for some folks I know would be devastating!
>>
>> As for Dan's hypothetical, yes, requiring participants to face risk of arrest (or of being handcuffed, which can be terrifying and actually can hurt, or tear gassed, or billy clubbed or even shot--sometimes the degree of real world risk is difficult to assess) is unethical, especially if, as minors, they are not legally capable of making decisions about the risk themselves.
>>
>> Christopher J. Richter, Ph.D.
>> Associate Professor, Communication Studies
>> Hollins University
>> Roanoke VA, USA
>>
>> On Jan 12, 2018, at 7:46 AM, Sohail Dahdal <sohail.dahdal at gmail.com> wrote:
>>
>> Asking your research subjects to take ethical but illegal actions could be either 'ethical research' or not, depending on the risk you might expose your subjects to...
>>
>> In that sense highly ethical actions could actually be highly unethical research.
>>
>> In the case of FB fake accounts, you have to ask about the risk to your subjects, including the risk of forming bad habits, not just the legal implications.
>>
>> Prof Sohail Dahdal,
>> American University of Sharjah
>>
>> On 12 Jan 2018, at 3:30 am, Dan L. Burk <dburk at uci.edu> wrote:
>>
>> So we seem to agree on your second statement.
>>
>> Regarding the first: suppose that Charles designs a study that asks
>> minors (or really anyone) to engage in civil disobedience. Perhaps he
>> asks them to trespass, with a risk of arrest and an arrest record. For
>> good reasons, like saving baby seals or giving persons of color seats at
>> the lunch counter or something.
>>
>> Definitely illegal. But also highly ethical behavior on the part of the
>> study subjects.
>>
>> Is Charles behaving unethically in asking them to behave highly
>> ethically but illegally?
>>
>> Not really my rodeo, but I strongly suspect that the behavior is ethical
>> all the way down.
>>
>> Cheers, DLB
>>
>> On 2018-01-10 22:07, Christopher J. Richter wrote:
>>
>> Ah, but the question is not whether it is ethical for the minors to violate a (for them) non-binding agreement, but whether it is ethical for the presumably adult researcher to require it of them. And just because something is legal, that does not make it ethical.
>>
>> Christopher J. Richter, Ph.D.
>> Associate Professor, Communication Studies
>> Hollins University
>> Roanoke VA, USA
>>
>> On Jan 10, 2018, at 11:37 PM, Dan L. Burk <dburk at uci.edu> wrote:
>>
>> So, although I am not saying that the study design is ethical, or even necessarily a good idea, I would most definitely take issue with either the specific assertion that violating an adhesion contract is always unethical (it is called an adhesion contract for good reason), and with the more general assertion that violations of law are always unethical.
>>
>> Also, non-trivially, the assertion is a non-sequitur: minors generally can't enter into binding contracts, so there is by definition no contract for them to violate.
>>
>> None of that means you should go ahead and do it; only that if you decline to do so, it should be for some other reasons.
>>
>> Cheers, DLB
>>
>> Dan L. Burk
>> Chancellor's Professor of Law
>> University of California, Irvine
>> ++++++++++++++++++++++++++++++++
>> 2017-18 Fulbright Cybersecurity Scholar
>>
>> On 2018-01-10 09:28, Christopher J. Richter wrote:
>> Dear Charles,
>>
>> TOS agreements are most often legally binding. Requiring minors (indeed any study participant, but especially minors) to violate a legal contract, whether online or off, is unethical on the face of it.
>>
>> Then there is the issue of deception, of whom and how interactions on the fake accounts are deceiving. Deception, by definition, undermines informed consent. Will those who are deceived be debriefed? If not, it's problematic.
>>
>> Christopher J. Richter, Ph.D.
>> Associate Professor, Communication Studies
>> Hollins University
>> Roanoke VA, USA
>>
>> On Jan 10, 2018, at 4:44 PM, Charles M. Ess <c.m.ess at media.uio.no> wrote:
>>
>> Dear AoIRists,
>>
>> What are your thoughts regarding the following?
>>
>> A research project involves a small number of students, legally minors - and requires that they set up fake FB accounts for the sake of role-playing in an educational context?
>> Of course, fake accounts are a clear violation of the FB ToS.
>>
>> I know we've discussed the ethics of researchers doing this (with mixed results, i.e., some for, some concerned).
>>
>> But I'm curious what folk think / feel about this version of the problem.
>>
>> Many thanks in advance,
>> - charles
>> --
>> Professor in Media Studies
>> Department of Media and Communication
>> University of Oslo
>> <http://www.hf.uio.no/imk/english/people/aca/charlees/index.html>
>>
>> Postboks 1093
>> Blindern 0317
>> Oslo, Norway
>> c.m.ess at media.uio.no
>> _______________________________________________
>> The Air-L at listserv.aoir.org mailing list
>> is provided by the Association of Internet Researchers http://aoir.org [1]
>> Subscribe, change options or unsubscribe at: http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org [2]
>>
>> Join the Association of Internet Researchers:
>> http://www.aoir.org/
>> --
>>
>> --
>> Dan L. Burk
>> Chancellor's Professor of Law
>> University of California, Irvine
>> ++++++++++++++++++++++++++++++++
>> 2017-18 Fulbright Cybersecurity Scholar
>>
>> Links:
>> ------
>> [1] http://aoir.org
>> [2] http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org
>
> --
> Dan L. Burk
> Chancellor's Professor of Law
> University of California, Irvine
> ++++++++++++++++++++++++++++++++
> 2017-18 Fulbright Cybersecurity Scholar
>
>
> Links:
> ------
> [1] http://aoir.org
> [2] http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org
>
> ------------------------------
>
> Message: 6
> Date: Fri, 12 Jan 2018 12:19:57 -0500
> From: Joly MacFie <joly at punkcast.com>
> To: aoir list <air-l at aoir.org>
> Subject: Re: [Air-L] WEBCAST FRIDAY: Digital Preservation: Policy
> Challenges (with Vint Cerf)
> Message-ID:
> <CAM9VJk2msi0of4VZQJXG-x=1fKq9=WwvXYutMcReOJt6YhwSFg at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> This is just kicking off.
>
> On Tue, Jan 9, 2018 at 3:10 PM, Joly MacFie <joly at punkcast.com> wrote:
>
>> This is a first for ISOC-NY! An event in Washington DC! But don't worry,
>> there will be a follow up in NYC on Feb 5. If you can make it into Google
>> in DC on Friday, there is still plenty of room, otherwise please join the
>> webcast. You can register for either on the eventbrite or facebook event
>> pages. See links below.
>>
>>
>>
>> On *Friday
>> January 12 2018* at *Noon EST *the *Internet Society New York Chapter*
>> <http://isoc-ny.org/> (ISOC-NY), in partnership with the *Greater
>> Washington DC Internet Society Chapter <http://isoc-dc.org/> *(ISOC-DC)
>> presents *Digital Preservation: Policy Challenges
>> <https://www.eventbrite.com/e/digital-preservation-policy-challenges-with-vint-cerf-tickets-41793714124>* at
>> Google's offices in Washington DC. This event, the first of a series,
>> focuses on the policy aspects of digital preservation: Is there a role for
>> regulators? Should there be global standards? Should those standards be
>> patent-free? Keynote speaker will be *Vint Cerf*, Chief Internet
>> Evangelist, Google. Respondents: *Michelle Wu*, Director of the Law
>> Library, Georgetown Law; and *Kate Zwaard*, Chief of National Digital
>> Initiatives, Library of Congress. The event will be webcast live on the *Internet
>> Society Livestream Channel
>> <https://livestream.com/internetsociety/dppolicy>* with open captioning.
>>
>> *What: Digital Preservation: Policy Challenges
>> <https://www.internetsociety.org/blog/2018/01/preserving-future-one-bit-time/>*
>> *Where: Google, Washington DC*
>> *When: Friday January 12 2018 - Noon-2pm*
>> *About: https://www.internetsociety.org/blog/2018/01/preserving-future-one-bit-time/
>> <https://www.internetsociety.org/blog/2018/01/preserving-future-one-bit-time/>*
>> *Register (in person or webcast): Eventbrite
>> <https://www.eventbrite.com/e/digital-preservation-policy-challenges-with-vint-cerf-tickets-41793714124> | Facebook
>> <https://www.facebook.com/events/459935521071213/>*
>> *Webcast: https://livestream.com/internetsociety/dppolicy
>> <https://livestream.com/internetsociety/dppolicy>*
>> *Captioning: http://streamtext.net/player?event=CFI-ISOC
>> <http://streamtext.net/player?event=CFI-ISOC>*
>> *Twitter: @isocny + #digitalpreservation <http://bit.ly/isocnydp>*
>>
>>
>>
>> *Permalink*
>>
>> http://isoc-ny.org/p2/9792
>>
>>
>>
>>
>> --
>> ---------------------------------------------------------------
>> Joly MacFie 218 565 9365 Skype:punkcast
>> --------------------------------------------------------------
>> -
>>
>
>
>
> --
> ---------------------------------------------------------------
> Joly MacFie 218 565 9365 Skype:punkcast
> --------------------------------------------------------------
> -
>
>
> ------------------------------
>
> Message: 7
> Date: Fri, 12 Jan 2018 09:25:18 -0800
> From: Jenna Burrell <jenna1 at gmail.com>
> To: Yosem Companys <ycompanys at gmail.com>
> Cc: "Science & Technology Studies" <STS at nic.surfnet.nl>,
> STSGRAD at googlegroups.com, AIR <air-l at aoir.org>, Stephen Paff
> <stephen.paff at gmail.com>
> Subject: Re: [Air-L] Any sociological or STS research on machine
> learning?
> Message-ID:
> <CABOqA1rhGfBmT==h9ZXJwgjic+9pYgSXgrdqTXEAMQDDR+2MHQ at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi Stephen and AIR-L,
>
> Yes, there's a lot of work by sociologists and STS researchers on machine
> learning, including books published in the last year or about to come out...
>
> Virginia Eubanks' book *Automating Inequality: How High-Tech Tools Profile,
> Punish and Police the Poor* is about to come out. I believe it's an
> ethnography and that it deals, at least in part, with applications of
> machine learning (in areas like predictive policing).
>
> There's a new book out by STS scholar Adrian Mackenzie, *Machine Learners* -
> https://mitpress.mit.edu/books/machine-learners
>
> Also look at what Nick Seaver has written. He has an ethnography coming out
> on music recommendation systems/algorithms (http://nickseaver.net/)
>
> Malte Ziewitz did an ethnography of the search engine optimization industry
> and has done lots of work in this space - http://zwtz.org/
>
> Marion Fourcade has a deeply sociological read on this topic and has written
> a great piece about the "mechanisms" that produce "classification
> situations" which are consequential for life circumstances (she doesn't use
> the phrase machine learning, but ML certainly composes some of the
> underlying 'mechanisms' she is concerned with) - http://www.sciencedirect.com/science/article/pii/S0361368213000743
>
> I've also written something in this space: "How the machine 'thinks':
> Understanding opacity in machine learning algorithms"
> http://journals.sagepub.com/doi/abs/10.1177/2053951715622512 - I'm a
> sociologist and an ethnographer, though this particular piece isn't
> ethnographic.
>
> This list just scratches the surface; there's so much work coming out in
> this space, so I'll offer some names of other people to look into: Solon
> Barocas, Karen Levy, Kate Crawford, Christian Sandvig, Tarleton Gillespie,
> Angèle Christin, Mike Ananny, Nick Diakopoulos, Luke Stark. Plus
> lots of people doing work in this space at Data & Society (
> https://datasociety.net/).
>
> Jenna Burrell
> Associate Professor
> School of Information
> UC-Berkeley
>
>
> On Fri, Jan 12, 2018 at 8:29 AM, Yosem Companys <ycompanys at gmail.com> wrote:
>
>> From: Stephen Paff <stephen.paff at gmail.com>
>>
>> Hello everyone,
>>
>> I am conducting research into the anthropology of machine learning. Does
>> anyone know of ethnographies of the development, implementation, and/or use
>> of machine learning algorithms? Are there any sociologists, STS
>> researchers, or scholars from other related fields studying machine
>> learning whose work I should look into as well?
>>
>> Sincerely,
>> Stephen Paff
>> _______________________________________________
>> The Air-L at listserv.aoir.org mailing list
>> is provided by the Association of Internet Researchers http://aoir.org
>> Subscribe, change options or unsubscribe at: http://listserv.aoir.org/listinfo.cgi/air-l-aoir.org
>>
>> Join the Association of Internet Researchers:
>> http://www.aoir.org/
>>
>
>
> ------------------------------
>
> Message: 8
> Date: Fri, 12 Jan 2018 13:57:25 -0500
> From: Roberge Jonathan <Jonathan.Roberge at UCS.INRS.Ca>
> To: Jenna Burrell <jenna1 at gmail.com>, Yosem Companys
> <ycompanys at gmail.com>, "jgduchesne at hotmail.com"
> <jgduchesne at hotmail.com>, "marius.senneville at gmail.com"
> <marius.senneville at gmail.com>, "Morin, Kevin"
> <Kevin.Morin at ucs.inrs.ca>
> Cc: Stephen Paff <stephen.paff at gmail.com>, "STSGRAD at googlegroups.com"
> <STSGRAD at googlegroups.com>, Science & Technology Studies
> <STS at nic.surfnet.nl>, AIR <air-l at aoir.org>
> Subject: Re: [Air-L] Any sociological or STS research on machine
> learning?
> Message-ID:
> <376ACBC5C0F4F94989643C173F667F7F08D7A9C5D9 at Boreas.INRS-UCS.UQuebec.Ca>
>
> Content-Type: text/plain; charset="Windows-1252"
>
>
> Hi Stephen,
>
> Many thanks for your question and for raising this pressing issue (i.e., that we social scientists might need to work faster and better on AI and ML).
>
> While I would mostly second Professor Burrell's comment, I would also like to point out that there are two intertwined bodies of literature here. One deals with the last 5 years or so of what has been written on algorithms and algorithmic cultures. On top of the mostly American list provided, I would suggest work by David Beer, Rob Kitchin, or our very own book on algorithmic cultures, edited by Robert Seyfert and me
> (see https://www.routledge.com/Algorithmic-Cultures-Essays-on-Meaning-Performance-and-New-Technologies/Seyfert-Roberge/p/book/9781138998421).
>
> The second body of literature is indeed smaller, as it deals specifically with artificial intelligence and all of its latest twists (ML, deep learning, etc.). I would once more agree with Professor Burrell that Adrian Mackenzie's new book has a good chance of becoming the equivalent of Nick Bostrom's important, but maybe more philosophical, book on superintelligence. Of late, I've also come across several interesting pieces (some new, some rather old or outdated but that could serve as a sort of archeology). Here they are:
>
> Carley, K. M. (1996). Artificial Intelligence within Sociology. Sociological Methods & Research, 25(1), 3-30. https://doi.org/10.1177/0049124196025001001
> Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: Free Press.
> Fox, S. (2016). Domesticating artificial intelligence: Expanding human self-expression through applications of artificial intelligence in prosumption. Journal of Consumer Culture. https://doi.org/10.1177/1469540516659126
> Gunkel, D. J. (2012). Communication and artificial intelligence: Opportunities and challenges for the 21st century. communication+1, 1(1), 1-25.
> Hoffman, S. G. (2017). Managing Ambiguities at the Edge of Knowledge: Research Strategy and Artificial Intelligence Labs in an Era of Academic Capitalism. Science, Technology, & Human Values, 42(4), 703-740. https://doi.org/10.1177/0162243916687038
> Natale, S., & Ballatore, A. (2017). Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence: The International Journal of Research into New Media Technologies. https://doi.org/10.1177/1354856517715164
> Stilgoe, J. (2017). Machine learning, social learning and the governance of self-driving cars. Social Studies of Science. https://doi.org/10.1177/0306312717741687
> Tripathi, A. K. (2017). Hermeneutics of technological culture. AI & SOCIETY, 32(2), 137-148. https://doi.org/10.1007/s00146-017-0717-4
>
> My apologies for all the typos; I'm a Francophone who doesn't much like working on Friday afternoons!!! :)
> Best, JR
>
> Jonathan Roberge
>
> Professeur-chercheur agrégé
> Titulaire de la Chaire de recherche du Canada sur les Nouveaux Environnements Numériques et l'Intermédiation Culturelle (NENIC Lab)
>
> Institut national de la recherche scientifique
> Centre Urbanisation Culture Société
> 490, rue de la Couronne, Québec, Qc., Canada, G1K 9A9
> Tél. 418-687-6401, fax. 418-687-6425
> ________________________________________
> From: Air-L [air-l-bounces at listserv.aoir.org] on behalf of Jenna Burrell [jenna1 at gmail.com]
> Sent: January 12, 2018 12:25
> To: Yosem Companys
> Cc: Stephen Paff; STSGRAD at googlegroups.com; Science & Technology Studies; AIR
> Subject: Re: [Air-L] Any sociological or STS research on machine learning?
>
> Hi Stephen and AIR-L,
>
> Yes, there's a lot of work by sociologists and STS researchers on machine
> learning, including books published in the last year or about to come out...
>
> Virginia Eubanks' book *Automating Inequality: How High-Tech Tools Profile,
> Punish and Police the Poor* is about to come out. I believe it's an
> ethnography and that it deals, at least in part, with applications of
> machine learning (in areas like predictive policing).
>
> There's a new book out by STS scholar Adrian Mackenzie, *Machine Learners* -
> https://mitpress.mit.edu/books/machine-learners
>
> Also look at what Nick Seaver has written. He has an ethnography coming out
> on music recommendation systems/algorithms (http://nickseaver.net/)
>
> Malte Ziewitz did an ethnography of the search engine optimization industry
> and has done lots of work in this space - http://zwtz.org/
>
> Marion Fourcade has a deeply sociological read on this topic and has written
> a great piece about the "mechanisms" that produce "classification
> situations" which are consequential to life circumstances (she doesn't use
> the phrase machine learning, but ML certainly composes some of the
> underlying 'mechanisms' she is concerned with) -
> http://www.sciencedirect.com/science/article/pii/S0361368213000743
>
> I've also written something in this space: "How the machine 'thinks':
> Understanding opacity in machine learning algorithms"
> http://journals.sagepub.com/doi/abs/10.1177/2053951715622512 - I'm a
> sociologist and an ethnographer, though this particular piece isn't
> ethnographic.
>
> This list just scratches the surface ... there's so much work coming
> out in this space, so I'll offer some names of other people to look
> into: Solon Barocas, Karen Levy, Kate Crawford, Christian Sandvig, Tarleton
> Gillespie, Angèle Christin, Mike Ananny, Nicholas Diakopoulos, Luke Stark. Plus
> lots of people doing work in this space at Data & Society (
> https://datasociety.net/).
>
> Jenna Burrell
> Associate Professor
> School of Information
> UC-Berkeley
>
>
> On Fri, Jan 12, 2018 at 8:29 AM, Yosem Companys <ycompanys at gmail.com> wrote:
>
>> From: Stephen Paff <stephen.paff at gmail.com>
>>
>> Hello everyone,
>>
>> I am conducting research into the anthropology of machine learning. Does
>> anyone know of ethnographies of the development, implementation, and/or use
>> of machine learning algorithms? Are there any sociologists, STS
>> researchers, or scholars from other related fields studying machine
>> learning whose work I should look into as well?
>>
>> Sincerely,
>> Stephen Paff
>> _______________________________________________
>> The Air-L at listserv.aoir.org mailing list
>> is provided by the Association of Internet Researchers http://aoir.org
>> Subscribe, change options or unsubscribe at: http://listserv.aoir.org/
>> listinfo.cgi/air-l-aoir.org
>>
>> Join the Association of Internet Researchers:
>> http://www.aoir.org/
>>
>
> ------------------------------
>
> Message: 9
> Date: Fri, 12 Jan 2018 22:17:10 -0500
> From: "Peter Timusk" <peterotimusk at gmail.com>
> To: <air-l at listserv.aoir.org>
> Subject: Re: [Air-L] research ethics again - students and FB
> Message-ID: <036c01d38c1d$00e0f780$02a2e680$@gmail.com>
> Content-Type: text/plain; charset="us-ascii"
>
> I agree with professor Christopher here on the ethics and harm questions
> presented in the thread.
>
>
> Also, I disagree with some researchers here: just because something
> is online in public view does not give us the right to make data from it.
> For example, an open Facebook group cannot be used for data without each
> contributor's express, explicit consent to have their content treated as
> data. I work in government, where there are laws concerning my use of our
> study data. I may be more conservative in this view of privacy and/or
> copyright.
>
> I want to add, and I do not mean to troll, but apparently Inuit eat seals and
> have traditional ways of life; I have eaten cows, including baby cows as
> veal; apparently some people eat dogs.
>
> Cats apparently rule the Internet.
>
> Peter Timusk B.Math ( statistics), B.A. (legal studies) Graduate school in
> systems sciences.
>
>
> -----Original Message-----
> From: Air-L [mailto:air-l-bounces at listserv.aoir.org] On Behalf Of
> Christopher J. Richter
> Sent: Friday, January 12, 2018 2:57 AM
> To: Sohail Dahdal
> Cc: air-l
> Subject: Re: [Air-L] research ethics again - students and FB
>
> As Sohail Dahdal clarifies, the ethics of human subject research turn not on
> whether the actions subjects are directed to take are laudable, but on whether
> the research involves risk of harm to the participants, in this case,
> minors.
>
> Risks for the participants of the study as proposed might also include that
> of being denied social media service in future, which for some folks I know
> would be devastating!
>
> As for Dan's hypothetical, yes, requiring participants to face risk of
> arrest (or of being handcuffed, which can be terrifying and can actually
> hurt, or tear gassed, or billy clubbed, or even shot; sometimes the degree of
> real-world risk is difficult to assess) is unethical, especially if, as
> minors, they are not legally capable of making decisions about the risk
> themselves.
>
> Christopher J. Richter, Ph.D.
> Associate Professor, Communication Studies Hollins University Roanoke VA,
> USA
>
>> On Jan 12, 2018, at 7:46 AM, Sohail Dahdal <sohail.dahdal at gmail.com>
> wrote:
>>
>> Asking your research subjects to take ethical but illegal actions could be
> either 'ethical research' or not, depending on the risk you might expose
> your subjects to...
>>
>> In that sense highly ethical actions could actually be highly unethical
> research.
>>
>> In the case of FB fake accounts, you have to ask about the risk to your
> subjects, including the risk of forming bad habits, not just the legal
> implications.
>>
>> Prof Sohail Dahdal,
>> American University of Sharjah
>>
>>> On 12 Jan 2018, at 3:30 am, Dan L. Burk <dburk at uci.edu> wrote:
>>>
>>> So we seem to agree on your second statement.
>>>
>>> Regarding the first: suppose that Charles designs a study that asks
>>> minors (or really anyone) to engage in civil disobedience. Perhaps
>>> he asks them to trespass, with a risk of arrest and an arrest record.
>>> For good reasons, like saving baby seals or giving persons of color
>>> seats at the lunch counter or something.
>>>
>>> Definitely illegal. But also highly ethical behavior on the part of
>>> the study subjects.
>>>
>>> Is Charles behaving unethically in asking them to behave highly
>>> ethically but illegally?
>>>
>>> Not really my rodeo, but I strongly suspect that the behavior is
>>> ethical all the way down.
>>>
>>> Cheers, DLB
>>>
>>>> On 2018-01-10 22:07, Christopher J. Richter wrote:
>>>>
>>>> Ah, but the question is not whether it is ethical for the minors to
> violate a (for them) non-binding agreement, but whether it is ethical for
> the presumably adult researcher to require it of them. And just because
> something is legal, that does not make it ethical.
>>>>
>>>> Christopher J. Richter, Ph.D.
>>>> Associate Professor, Communication Studies Hollins University
>>>> Roanoke VA, USA
>>>>
>>>> On Jan 10, 2018, at 11:37 PM, Dan L. Burk <dburk at uci.edu> wrote:
>>>>
>>>> So, although I am not saying that the study design is ethical, or even
> necessarily a good idea, I would most definitely take issue both with the
> specific assertion that violating an adhesion contract is always unethical
> (it is called an adhesion contract for good reason), and with the more
> general assertion that violations of law are always unethical.
>>>>
>>>> Also, non-trivially, the assertion is a non-sequitur: minors generally
> can't enter into binding contracts, so there is by definition no contract
> for them to violate.
>>>>
>>>> None of that means you should go ahead and do it; only that if you
> decline to do so, it should be for some other reasons.
>>>>
>>>> Cheers, DLB
>>>>
>>>> Dan L. Burk
>>>> Chancellor's Professor of Law
>>>> University of California, Irvine
>>>> ++++++++++++++++++++++++++++++++
>>>> 2017-18 Fulbright Cybersecurity Scholar
>>>>
>>>> On 2018-01-10 09:28, Christopher J. Richter wrote:
>>>> Dear Charles,
>>>>
>>>> TOS agreements are most often legally binding. Requiring minors (indeed
> any study participant, but especially minors) to violate a legal contract,
> whether online or off, is unethical on the face of it.
>>>>
>>>> Then there is the issue of deception, of whom and how interactions on
> the fake accounts are deceiving. Deception, by definition, undermines
> informed consent. Will those who are deceived be debriefed? If not, it's
> problematic.
>>>>
>>>> Christopher J. Richter, Ph.D.
>>>> Associate Professor, Communication Studies Hollins University
>>>> Roanoke VA, USA
>>>>
>>>> On Jan 10, 2018, at 4:44 PM, Charles M. Ess <c.m.ess at media.uio.no>
> wrote:
>>>>
>>>> Dear AoIRists,
>>>>
>>>> What are your thoughts regarding the following?
>>>>
>>>> A research project involves a small number of students, legally minors,
> and requires that they set up fake FB accounts for the sake of role-playing
> in an educational context.
>>>> Of course, fake accounts are a clear violation of the FB ToS.
>>>>
>>>> I know we've discussed the ethics of researchers doing this (with mixed
> results, i.e., some for, some concerned).
>>>>
>>>> But I'm curious what folk think / feel about this version of the
> problem.
>>>>
>>>> Many thanks in advance,
>>>> - charles
>>>> --
>>>> Professor in Media Studies
>>>> Department of Media and Communication University of Oslo
>>>> <http://www.hf.uio.no/imk/english/people/aca/charlees/index.html>
>>>>
>>>> Postboks 1093
>>>> Blindern 0317
>>>> Oslo, Norway
>>>> c.m.ess at media.uio.no
>>>
>>> --
>>>
>>> --
>>> Dan L. Burk
>>> Chancellor's Professor of Law
>>> University of California, Irvine
>>> ++++++++++++++++++++++++++++++++
>>> 2017-18 Fulbright Cybersecurity Scholar
>>>
>>>
>
>
>
> ------------------------------
>
> Subject: Digest Footer
>
>
> ------------------------------
>
> End of Air-L Digest, Vol 162, Issue 14
> **************************************