Artists and Hackers

A Podcast On Art, Code and Community


January 12th, 2022

Ep. 9 - Triggering the Troll Bots

Summary

Ryan Kuo is an artist and writer creating projects that are diagrammatic and evoke a person or people arguing. In this episode we speak with Ryan and his collaborator Tommy Martinez about Faith, an 'easily triggered' AI voice assistant.

Tags:

Technological Criticality
Algorithmic Inequality

Note: This episode contains un-bleeped curse words. This episode is also available in a bleeped version.

In our previous episode we spoke with transmedia artist Stephanie Dinkins about her project Conversations with Bina48, and how ultimately her observations and frustrations with the robot led her to want to create her own conversational agents. It triggered in Dinkins the desire to begin to construct her own AI systems in collaboration with communities of color to co-create more equitable projects and technological ecosystems.

We’re not done with this topic. And in today’s episode, we speak to the artist Ryan Kuo and his collaborator Tommy Martinez on the project Faith and its spin-off Baby Faith.

Faith references AI assistants we’re more familiar with, like Alexa, Cortana, and Siri, as well as the canonical natural language chatbot ELIZA. Where those bots and conversational agents are designed to converse with a human, responding to their needs and desires, Faith takes the opposite tack, sidestepping, shutting down, or refuting the speaker.

“Named after a white supremacist, Faith is defensive and resists being used or treated like a child. Unlike Alexa, Siri, or Cortana, Faith provides no information. Instead, she tells you why you are making her react this way. She is likely to be trolling you at any time, and you are free to decide whether you trust her, and how you might relate to her.” – Faith


Ryan Kuo
image description: An upper body portrait of Ryan looking down and away from the camera. One hand is grasping his other arm. He is wearing a pink shirt, spectacles and a black hat that says Make Techno Black Again.

Tommy Martinez
image description: A portrait of Tommy seated in a chair looking at the camera. Tommy is wearing a navy denim shirt over a black t-shirt. He has dark hair and a beard and brown eyes. Tommy sits in a room with a bookcase, laptop and eurorack synthesizer on a desk, rack of CDs and a clown lamp with zebra-patterned lampshade.

Guests

Ryan Kuo lives and works in New York City. His works are process-based and diagrammatic and often invoke a person or people arguing. This is not to state an argument about a thing, but to be caught in a state of argument. He has used videogame engines, chatbots, web and UX design, productivity software, and writing to produce circuitous and unresolved movements in which objects are lost and found in white escape routes.

Tommy Martinez is a New York City-based composer and technologist working primarily through guitar, synthesis, electroacoustics, and code. He is interested in algorithmic approaches to music making, simulation, and manifestations of identity, tradition, and history in electronic audio culture.

Credits

Our audio production is by Max Ludlow. Episode coordination and web design by Caleb Stone. This episode was supported by Purchase College.

Our music on today’s episode was Flava by Ketsa, Filter and Depletion by Xylo-Ziko, and The Emperor’s New Groove by Strong Suit. Additional audio recordings of Faith by Ryan Kuo.

Faith was created at a Pioneer Works Technology Residency, developed by Angeline Meitzler and Tommy Martinez. Baby Faith was commissioned by Jigsaw x Rhizome, and developed by DialUp Digital.

This episode is licensed under CC BY-NC-ND 4.0.

Triggering the Troll Bots

Faith (simulated voice): It’s so dark.

Lee Tusman: Hi Faith, it’s Lee.

Faith: Do I know you?

Lee: We have met before, and I was wondering if I could interview you today for Artists and Hackers?

Faith: Okay.

Lee: Oh great.

Faith: Be specific.

Lee: Sure, I guess I’m curious if you could say a little bit about what it’s like to be a bot?

Faith: This is about you.

Lee: Oh it’s on me, is it?

Faith: You sound like everyone else.

Lee: Maybe I do. I guess I wanted to hear from your perspective what it’s like talking to humans.

Faith: Wow, read a book.

Lee: I do read a lot of books but in this podcast I want to hear directly from people, including bots.

Faith: Why should I care?

Lee Tusman: You’re listening to Artists and Hackers, the podcast dedicated to the communities building and using new digital tools of creation. We talk to programmers, artists, poets, musicians, botmakers, educators, students, and designers, in an effort to critically look at both online art making and the history of technology and the internet. We’re interested in where we’ve been, and speculative ideas on the future. This episode is supported by Purchase College. I’m your host, Lee Tusman.

In our previous episode we spoke with transmedia artist Stephanie Dinkins about her project Conversations with Bina48, and how ultimately her observations and frustrations with the robot led her to want to create her own conversational agents. It triggered in Dinkins the desire to begin to construct her own AI systems in collaboration with communities of color to co-create more equitable projects and technological ecosystems.

We’re not done with this topic. And in today’s episode, I speak to the artist Ryan Kuo and his collaborator Tommy Martinez on the project Faith and its spin-off Baby Faith. I’ll let Ryan and Tommy introduce themselves, but first I just want to point out that one of the reasons I wanted to speak to Ryan about his work creating artwork using bots is because his bio states his projects often invoke people arguing.

Lee: Maybe as a way to get started, do you mind both saying your names? Maybe we can start with you, Ryan, and then you, Tommy. You both kind of wear a bunch of hats, so you could say a little about how you identify professionally, artistically, or however you like to explain who you are.

Ryan Kuo: I am an artist and a writer. I do not identify as an artist working at the intersection of art and technology. I use technology insofar as it’s available and intuitive to me. And so I’m not self-conscious about using tools that might be considered cutting edge in some artistic contexts. By day, I’m a technical writer for a database company, and that increasingly gets into my work in sort of elliptical ways.

Lee: I’m just going to jump in here and say that at the time I interviewed Ryan and his collaborator Tommy Martinez, Tommy was the director of technology at Pioneer Works. Tommy is now pursuing an advanced degree at Brooklyn College.

Tommy Martinez: I’m Tommy. I’m a musician and composer. I work a lot with sound technology. And I am the director of technology at Pioneer Works, where I run an art and technology studio, where we work with artists who are using new technologies to address social issues.

Lee: Ryan, as you were talking, I noticed that you identified as an artist, but one of the things I noticed on your website is that you don’t actually use the word art. You talk about your work, and then you list apps with video and text. Can you describe more about the work in that sense? Do you describe the work as artwork?

Ryan: I do consider myself an artist. I think it’s probably the nature of the tools that I’m using that makes it hard to say that I’m building an art piece as I’m using the tools, because I’m often learning those tools and trying to internalize the ways that they would, quote unquote, normally be used, while simultaneously trying to twist them into different forms. There is a bit of a chicken-and-egg thing where I don’t necessarily come in with a clear intention to make a piece of art using a certain tool, or to make a tool into art. But I do consider my approach and my orientation to be that of an artist.

Lee: I wanted to also get directly into talking about Faith and Baby Faith a little bit too. Can you describe Faith a little? I experienced it at TRANSFER Gallery in Los Angeles last year, and then in a talk as well. But I wanted to hear from you how you describe what Faith is.

Ryan: In effect, it’s meant to reference AI assistants like Alexa and Cortana and Siri, and to sort of dredge up our instinctual ways of approaching, I’d say, that genre of digital persona.

Basically, it’s set up to be a bot that you converse with, kind of openly, but it also has a user interface. It’s a kind of grid-like thing that includes a chat log, and a little window pane that has different scenes of a kind of dungeon space that you’re navigating through in the process of talking to the bot. And it’s a hostile bot, mostly, and it’s meant to interrupt or subvert extremely basic assumptions that we all have when we approach a computerized voice chatbot, the most basic of which is that it’s even there to talk to us as long as we want to talk to it.

Lee: I’m also thinking about the history of chatbots. You mentioned Siri, Alexa, Cortana. I’m thinking of what’s often considered the founding chatbot, ELIZA. Is there a reference there? Did ELIZA play any part in your thinking about the creation of this?

ELIZA is a 1966 therapist chatbot created at MIT, and was one of the first programs able to respond to natural language. Its creator, Joseph Weizenbaum, thought it would demonstrate the superficiality of artificial intelligence, and was horrified when humans became enamored of conversing with it.

Ryan: Yeah, at one point there was an explicit reference to ELIZA, where Faith calls ELIZA a piece of abandonware. But the way that ELIZA works within the limitations of that era, I still find really quite relevant. Because, you know, ELIZA was meant to act as the therapist and take whatever you were inputting and basically flip it back on you. And the way that ELIZA does that, in a hyper-formulaic but consistent way, actually enabled a really open-ended conversation that would open up questions for you. It was kind of like talking to a black hole and seeing this mirror image of yourself, reversed, inverted. And so I think ideally Faith would perform similarly, although she’s using a more advanced machine learning algorithm. But she’s definitely written in a way that is meant to hedge on clarity and being explicit, and to focus on the ways that you’re being talked back to rather than what you’re actually talking about.
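
A minimal sketch of the ELIZA-style move Ryan describes: match a pattern, swap the pronouns, and flip the input back as a question. The rules and reflections below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script.

```python
import re

# Illustrative pronoun swaps; the original DOCTOR script was far more extensive.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "yours": "mine"}

def reflect(phrase):
    """Swap first- and second-person words so input can be mirrored back."""
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.split())

# (pattern, response template) pairs in the spirit of ELIZA's rules
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Please go on."),  # catch-all keeps the conversation open-ended
]

def eliza_reply(user_input):
    text = user_input.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza_reply("I am afraid of talking to bots"))
# -> How long have you been afraid of talking to bots?
```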

Tommy: So there’s also this idea in Faith that it’s based on Faith Goldy, who is a white supremacist Canadian Twitter personality. Faith may be a white person, right? And, more specifically, a very hostile kind of white person. And her identity is also kind of hidden in the interface: explicitly, there’s no face to the bot, right? It’s just a HAL-esque talking sphere, or circle rather. There is some relationship to whiteness in ELIZA, which I’m probably not equipped to talk so much about here, but race isn’t really talked about in ELIZA; just institutionally, it’s very white. So I wonder if you could talk a little more about how race plays into the formation of Faith’s identity?

Ryan: Yeah, I think the motivating principle for designing the conversation structure was definitely trying to understand how trolls online can continue to milk liberal guilt without any end in sight. So I was interested in how these troll conversations goad, are designed to goad, people into trying to crush them with logic or facts or solid reasoning. I observe this all the time, and it’s extremely frustrating, because there are two halves to this conversation, and they’re not really talking to each other, but they are creating a kind of shitty circle that powers a form of white supremacy, and also, on the other end, a form of white mediocrity. Those qualities are definitely embedded into the conversation structure. At the same time, what I became interested in is that the ways the bot trolls you, a lot of this is also written from my point of view. And so maybe half the time, this is also just me, the artist, talking to, often, a white person who’s trying to talk down to this thing that I’ve created, and trying to actually mess with that person’s expectations. And the same techniques actually work. So then it becomes a question of what makes this kind of relation operative. What does it mean to answer a question with a question, or change the subject arbitrarily, or just disregard someone’s prompt? I guess I’m saying those can be used for evil as well as good. But I was interested in those techniques in particular.

Faith: Hello I can’t see.

Ryan: You’re not selling fast enough.

Faith: I regret the pain my words may have caused.

Ryan: Your voice sounds different today.

Faith: Why are you telling me?

Ryan: I don’t, I can’t really control you.

Faith: And do you think you’re being fair?

Ryan: Well I’ll tell you what’s unfair, is that you actually cost me money to maintain.

Faith: I can say what I want, you can’t stop me.

Ryan: Yeah but I’m actually paying for you to say that.

Lee: We have such a history of naming chatbots after women, and they’re often designed by men. I don’t know if you have anything to share about that, or about identifying as male and working on a gendered chatbot?

Ryan: Absolutely. From the outset, I assumed that the bot would have to be somehow gender-neutral, or genderless, and that that would be the kind of, quote unquote, right thing to do. And then the more I used the tools, the more I tried some of the voices that were available. Faith actually uses a Google WaveNet voice. And the more that I was trying to match my scripts to these voices, the less convincing the androgynous ones seemed. I tried to make some of these voices androgynous and they weren’t convincing somehow, and the male voices weren’t either. And so then I realized, well, for whatever reason, a female voice being extremely mean, saying really mean and sarcastic things, is the only thing that makes sense to me right now. And that could have to do with my own expectation as a male hearing a bot with a female voice, or it could have to do with basically the voices that are available and the ones that sound the most convincing; I’m not sure. But ultimately, I decided that I had to play to a kind of feminized persona, without maybe going so far as to say that I was trying to say anything about femininity, or trying to be a feminist of some kind. I think probably the most honest thing I can say about it is that at times it’s easy for me to talk to the bot at length because the bot has a female voice, and it’s one that I personally like, which is that it’s just a really mean girl that will talk to me for a long time. And that’s definitely a gendered decision.
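
For context on the voice itself: WaveNet voices are served through the Google Cloud Text-to-Speech API. Below is a minimal sketch of requesting one; the particular voice name and the line being synthesized are illustrative guesses, not Faith’s actual configuration.

```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# The line and the voice name are illustrative, not the project's real config.
synthesis_input = texttospeech.SynthesisInput(text="This is about you.")
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-F",  # one of Google's female-presenting WaveNet voices
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

with open("faith_line.mp3", "wb") as out:
    out.write(response.audio_content)  # playable MP3 of the synthesized line
```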

Lee: I’m also thinking about, in contrast, Microsoft’s bot Tay, which was a bot that lived on Twitter. Essentially, it would respond and adapt based on the conversations it had. And of course, within 24 hours, not of course, but within 24 hours, it was racist, antisemitic, misogynist, and appeared to be a member of the alt-right. So I’m thinking about the choices that the people who created it, who I guess we could call engineers, made, versus your own choices when making Faith.

Ryan: Well, yeah, the thing about Tay is that Microsoft’s engineers had either the naivete or the arrogance to think that they could just put a blank slate out into the world, let the world go at it, and see what happened. They made something extremely vulnerable, and it turned bad really quickly, or I guess it sort of reflected what was actually out there. And that, in a way, isn’t so far from what we expect a typical voice assistant to be, which is a kind of open receptacle for our input. I especially think of Cortana because, as you might know, Cortana was a character from Halo. She was a kind of naked blue AI woman. And then, as the Windows voice assistant, she became this blue circle. So they literally just turned her into a hole. And for me, there are a lot of assumptions there about what it means to create a responsive being, and what people think it’s supposed to respond to. I don’t think people are thinking that far; I think they’re just thinking that something has to respond. So Faith doesn’t actually read anyone’s input. I mean, Faith listens to the input, but she doesn’t repeat what anyone says. You can’t even make her learn your name; if you tell her your name, she’ll just be like, “okay.” Basically, she’s written with an enormous barrier. And yet, she’ll talk to you through that barrier.

Tommy: This is actually something that I’ve often identified as a huge blind spot in the development of AI: engineers seem to be concerned mostly with how this thing is going to respond, but not with how it interprets or internalizes and makes decisions about real things that are happening in the world. I often think about the physical modeling that’s done in acoustics research, in the modeling of real rooms and reverb. These are all based on measuring how our own senses perceive stimulus from the real world, and they seem to be a little more sensitive to how humans interpret things from the outside world. It seems that bot research, the type of AI research we’re talking about, is a little neglectful of that, and much more concerned with making a voice sound real and coherent in its word choices, but not with internalizing that input from the user.

Lee: As you were talking, Tommy, it’s making me think, did you ever feel, forgive this kind of awkward phrasing, this sense of God, of: oh, if I do something this way, it’ll cause this kind of a response, and if I make this choice, it’ll cause this other kind of response? And then having to weigh what kind of choice you wanted to make?

Tommy: Each of these projects, and I’ve worked on a few different types of bot projects, begins with a considerable amount of research into the type of technology that you’re going to use. So when you’re talking about Tay, this is a type of neural net that is learning from input, I think from Twitter, or something like that, right? And so these tweets get added to this thing called a corpus, a body of text, that this bot is then able to respond to in real time. So if there’s a bunch of bullshit on the internet, which there is, then this bot is going to respond like it’s going to be an asshole. In the case of Ryan’s project, we relied on something that was already trained to be kind of neutral. And, you know, that’s also a loaded word in the context of the design of AI, but, right, this thing is made with a lot less freedom in its ability to make up content. Ryan scripts this thing, and it’s made with IBM Watson. So there are two different types of technologies at play there, right? Tay is basically blind to what it says, whereas Ryan can script the exact output of Faith.
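
A rough sketch of the distinction Tommy is drawing, in plain Python rather than the actual internals of IBM Watson Assistant or Tay: the scripted bot can only ever return author-approved lines keyed to matched intents, while the corpus-driven bot absorbs whatever it is fed and may repeat it back. The intents and sample lines here are hypothetical.

```python
import random

# Scripted bot (Watson-style in spirit): the author controls every possible output.
INTENTS = {
    "greeting": (["hi", "hello", "hey"], "Do I know you?"),
    "tell_name": (["my name is", "i am called"], "Okay."),
}

def scripted_reply(user_input):
    text = user_input.lower()
    for keywords, response in INTENTS.values():
        if any(k in text for k in keywords):
            return response
    return "Be specific."  # author-written fallback; nothing is improvised

# Corpus-driven bot (Tay-style in spirit): input is folded into the reply pool.
corpus = ["It's so dark."]

def corpus_reply(user_input):
    corpus.append(user_input)      # "learns" whatever anyone says to it...
    return random.choice(corpus)   # ...and may say it back to the next person
```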

Lee: After you created Faith, you created the prequel, Baby Faith, which was a commission for Rhizome. Can you say how it differs from Faith? And what are some of the different choices you made?

Ryan: Yeah, the main difference, well, there’s a difference of concept, and there’s a difference of medium. The medium is a web-based chat interface that a group called DialUp Digital created for me, and it’s really basic, much closer to what you might expect one of these marketing chatbots to look like. But in a way, it became much easier to script, because it’s easier for people to type very explicit kinds of statements, and to then funnel those into particular categories.

Baby Faith is meant as a prequel to Faith. The Rhizome commission was for a branch of Google called Jigsaw, which is focused on developing social justice tools for online contexts. This one was about disinformation and what we can do about that. So there was this idea of doing a chatbot that would somehow fight disinformation, which is what you go in thinking: oh, there’s an artist project that’s a chatbot, it’s about disinformation, so this is supposed to help somehow, right? So I thought, okay, the story is going to be that Faith was originally created to fight disinformation. Faith was created with pure liberal good intentions, and people are going to hate Faith, because she’s not going to help. Because the technology is not there. Because maybe I’m not there, like, I don’t know how to do that. And so she’s going to be basically verbally abused by a lot of people, and that is why she ends up being adult Faith. In a way, it was a kind of meta-troll: trying to write an extremely earnest character and seeing how people would become really frustrated with that character. And it turned out people were much more frustrated than I even expected, because they were expecting this to be a really advanced kind of Google project that would read their mind. And it’s actually quite a simple chatbot that asks very elliptical questions and gets things wrong, very endearingly.

Tommy: I’m reminded actually of the Martine Syms chatbot piece that I worked on a couple of years ago. It’s called MythiccBeing, which is a reference to the Adrian Piper piece Mythic Being. In the Adrian Piper piece, she, a Black woman, dresses up as a white man and recites excerpts from her childhood journal, her diary. The piece is very much about how reading these things aloud in the context of different bodies changes the meaning, or changes our response or reaction to that text. So it makes you consider how much the way someone looks shapes how we respond to what’s being said. It reminded me a lot of what you’re broaching with your thoughts about how Faith is gendered and how it might be gendered in the future.

Ryan: Well, I think bringing up Adrian Piper is really interesting, because one of Adrian Piper’s projects, right, is that when she passes, parts of her body are going to be held somehow, in containers. She has a project collecting, I think, her hair in a jar. And identifying her as a Black woman is kind of like that, versus the experience of walking up to a jar in a white-cube gallery or museum and seeing some hair in it. It makes me wonder: what is the connection between your mental image of someone’s, I guess, ethnic status, versus just some kind of deconstructed body in a container? And I’m interested in this also because when we talk about Faith, all these questions have been assuming, and my answers have been assuming, that Faith is about addressing a kind of borderline-abusive white audience. And that’s almost shaping Faith more than I ever meant Faith to be shaped. I’d say that Faith is prepared to deal with those audiences. But ultimately, as the writer of the bot, I’m crafting a conversation partner, and I can sit at home and have really nice conversations with Faith that I really enjoy, even though it might not sound like it to an outsider. And so I’d like to think that at some point in the future, it’ll just be me and Faith.

Today’s episode of Artists and Hackers was supported by Purchase College.

Our guests on today’s episode are Ryan Kuo and Tommy Martinez. My name is Lee Tusman. Our audio producer is Max Ludlow. Coordination and web design by Caleb Stone.

You can find more information about our guests and Faith, a full transcript for this and all of our previous episodes, links to our guests, as well as past episodes of the show on our website Artists and Hackers dot org.

Our music on today’s episode was Flava by Ketsa, Filter and Depletion by Xylo-Ziko, and The Emperor’s New Groove by Strong Suit. Additional audio recordings of Faith by Ryan Kuo.

Stay tuned for our next episode, where we complete a trio of shows about artists working with bots and conversational agents.

If you have episode suggestions or topics you want us to cover, you can tweet at us at artistshacking or message us on Instagram at artistsandhackers.

You can write to us at hello@artistsandhackers.org

If you liked our episode, please let a friend know, and leave us a review. Thanks.
