Steven Shaviro is the DeRoy Professor of English at Wayne State University. He has written about science fiction in his books Connected, or, What It Means to Live in the Network Society (2003), Discognition (2016), and Extreme Fabulations (forthcoming).
Machine Logic Can Be Tricky: Pat Cadigan’s “AI and the Trolley Problem”
This discussion has been copied from the Discord server; names have been reduced to first names, and discussion threads have been grouped and edited for readability.
Josh: Hi Steven. I don’t have a clear question, just wanted to say I enjoyed this talk!
Hugh: Like Josh, I don’t yet have a question formulated, but I absolutely loved the presentation!
Steven: Thanks, and my apologies for the sound level being too high and for the lack of slides.
Seb: Hello! Great presentation! Interesting point you are making about machine ethics – do you think one can base ethics on a 0/1 model?
Steven: I doubt you can base ethics on a 0/1 model. I think that, in Cadigan’s story, it is situational – the AI is more carefully rational than the humans are. I think Cadigan is writing deliberately against frequent portrayals of AIs either as irrational and fanatical, OR as overly rational. There is a lot I didn’t get to in the shortened version for my presentation. For instance, at one point Felipe the AI talks calmly about having to take his own possibilities of being in error into account – this should be compared with how HAL in 2001 is sure of being incapable of error.
Seb: Yes, that makes sense. But then can the AI be still considered artificial – doubt is very human, no?
Steven: Part of the point is that the AI is still different from the humans, but he acts more human/humane than the humans themselves do. The AI is different because of the slightly stilted way he speaks, his own acknowledgement of being a simulation, etc.
Katherine: Hi Steven! Thanks for your paper. As you started to discuss how Cadigan’s story reflects philosophical critiques of the trolley problem, I found myself thinking about “The Cold Equations,” itself a kind of trolley problem. Also the much-repeated Asimov-style narrative where a robot is given a mandate and intelligence only to end up enacting mass death in order to “save lives.” Seeing as this Cadigan story is a much later take on such ethical questions, I was wondering if you could elaborate on how AI plays into your argument– what does incorporating “machine logic” do? (Aware of course that your final point was about human logic not machine logic!)
Steven: The quote comes up because the story’s protagonist Helen is the one who is taxed by the others to interpret why the AI did what he did. You make a great point that the story is written against stuff like “The Cold Equations”. I think that the AI reasons flexibly and contextually, and the point is that stories like “The Cold Equations”, and the more popular versions of the trolley problem, are incapable of this flexibility
Josh: After discovering Leslie Perri’s “Space Episode” in Sharp and Yaszek’s Sisters of Tomorrow, I will never read “Cold Equations” the same way again. The comparison really brings out how ideologically motivated and narratively constraining the Trolley structure is in that story.
Eero: It was very interesting to me, since my MA thesis talked about this kind of thinking in The Three-Body Problem – it’s a frequent dilemma in Liu Cixin’s work.
Hugh: Have you read Mitchell’s Ghostwritten? I was struck by the similarities between the presentations of AI (Felipe and the Zookeeper). If you are familiar with it, I’d love to hear what you think about the differences/similarities between the two in their presentation of AI logic and the choices they make regarding killing humans as a kind of good.
Steven: Sorry, I haven’t read Cloud Atlas or Ghostwritten.
Julia: Hi Steven, thanks for this engaging talk. What I found probably most striking was the line of reasoning about simulated emotions: If you act in the appropriate way to emulate certain emotions (because you consider it the ‘right thing to do’), then (if you lack the capacity to feel actual emotions), does it matter? The context seems to provide the AI with the appropriate frame of reference to make the ‘right choices’, compensating for the ‘lack’ that derives from their non-human otherness – an (unquestioned) human context. All this reminded me very much of the military robot Paladin in Annalee Newitz’s Autonomous (2017). Paladin, too, is a ‘tool of war’ for whom this part of their identity interacts in an interesting way with their sense of personhood / autonomy. Paladin, too, is aware that they’re not human, but they get anthropomorphized by their male human partner/love interest (and don’t mind). In Newitz’s novel, emotions (like love) are part of the programming (for example the manufacturer’s core subroutine ‘objet-petit-a’). In each case, the frame of reference seems to remain a human one – both AIs conform to these norms and values and never question their validity. This is probably an unanswerable question, but I keep wondering whether it’s conceivable (or are there even examples?) to imagine such a world where AIs overcome their creators’ hegemony AND YET manage to live peacefully side by side…?
Steven: Thanks, I like your comparison with Autonomous. I agree that Newitz is getting at something similar.
John: Le Guin’s Always Coming Home
Pawel: Maddox’s Halo
Hugh: From one point of view, wouldn’t this also be the premise of The Culture, or at least what makes the Culture work?
Pawel: Second that.
Steven: I like both your examples. It is a much larger extrapolation to think about nonhumancentric communities, as opposed to Cadigan’s slight projection, which basically still refers to current circumstances.
Graham: What about the Spike Jonze film Her (at least before the AI leaves the protagonist behind)? Might that work to an extent?
Julia: Adding stuff to my reading list! This is great!
Adam: Hi Steven! I have to say thank you for the talk, it was brilliant. I have one question I’ve not yet fully formed, but I wondered what you thought. It was about the current conundrums about self-driving cars and the trolley problem, a context Cadigan is probably directly riffing against. Does this negotiation between human and AI about its decision to kill the few suggest anything outside of a military context, where there is arguably a difference between ‘guilty’ and ‘innocent’ bystanders?
Steven: There is no general solution. Cadigan gives us the context of the war on terror, which is key to why the AI does what he does. The point is that something like the trolley problem cannot be answered in the abstract; it depends on circumstances. Doubtless programming for self-driving cars will reflect the presuppositions of the programmers as to which lives they value more.
Lars: This might be a question I need to put to Pat, but she mentioned on FB that your reading of the short story was actually not how many people understood it – any clue what other people thought it was supposed to say/mean?
Steven: I don’t really know, as I haven’t seen other readings. Of course, I was gratified that my own reading made sense to her.
Pawel: On a more general note, I am actually surprised that Cadigan’s fiction does not come up more often at conferences. Synners is brilliant, as is Mindplayers, and I really like her later novels, too.
Larisa: So here this lapse is somewhat compensated for.
Steven: Entirely agree. There was a paper on Synners yesterday. Cadigan hasn’t written as much recently – mostly just short stories, her last novel was in 2002 – because first she had to care for her dying mother, and then she had cancer.
Lars: I like Dervish is Digital and Tea from an Empty Cup
Steven: Me too.
Pawel: Exactly! They are very meticulous, and there is a humor there that we don’t see a lot of in cyberpunk.
Graham: I agree re: Cadigan’s importance in these discussions. I can’t remember much about Dervish, but there is a clear line from her earlier novel Fools back to Mindplayers and Synners and forward to Tea and Dervish. I know Synners is the novel that most of us gravitate towards, but I think Fools is the most important book in her oeuvre (hmm….maybe I should write on that book and make the case?).
Adam: I’ve been writing about Mindplayers and Fools for my thesis recently because I absolutely fell in love with them. Even if I think Fools needs many readings to get close to unpacking
Steven: Great. Cadigan really does deserve more attention.
Adam: I actually wondered if I could ask about the pessimism of the Trolley Problem story you discussed. She’s so dry in her humour sometimes I wondered if this later story showed a kind of futility to the whole thing (as in, there will always be potential casualties), or maybe if we stop it early enough we can re-lay the tracks?
Steven: There is a subplot in the short story that I discuss in the longer version, but that I had to cut out of my talk for length reasons. Somebody on the base, a person who is bipolar and has stopped taking her meds, goes around and starts riding the mechanical “horse” that is one of the AI’s extensions as if it were an actual horse. This leads to questions of personal responsibility (the person in question is supposed to be fired if she stops taking her meds, but as a qualified adult it is her free decision whether to take them or not), and the AI relates this to questions about his own freedom and responsibility. But also, the AI demands an apology – he says that, if he were human, he would feel insulted. (So he is acting “as if” he were insulted.) @Adam: Well, the AI in the story actually says that the only fully correct solution to the trolley problem is not to let the trolley on the tracks in the first place.
John: Could it be that the Trolley Problem assumes that it is always already too late to stop the train from leaving the station, but never asks whether it is possible to re-arrange the tracks to avoid a recurrence of the problem?
Steven: That is why Isabelle Stengers, and some other people I have read, condemn the problem in the first place: it sadistically assumes a situation in which every alternative is bad, and in this way it stops people from thinking about wider contexts.
Josh: I think this goes back to what the roundtable panelists were saying about how cyberpunk moves its politics into the subtext by taking the “tracks” of neoliberal policy and infrastructure for granted as a starting point.
Steven: The traditional trolley problem never tells us WHY the people are on the tracks and cannot move away. It just assumes this in the first place. So I agree with Josh that this is just like neoliberalism telling us that it’s too bad, we have limited resources and a war of all against all as pre-assumed conditions.
Steven: Stengers also writes about the difference between analytic philosophy thought experiments (which are strictly restricted in all these ways) and science fictional thought experiments (the point of which is to open up speculative possibilities by exploring wider consequences, instead of tamping things down). Of course, sf stories like “The Cold Equations” do the same thing that the analytic philosophers do.
John: I read “The Cold Equations” as a horror story in disguise – i.e., it is really about the ritual sacrifice of a virgin.
Steven: Yes, that makes sense. Thanks to everyone for these questions and comments!!