The Hype and Horror of Artificial Intelligence
Jobs, Politics, and Human Connection
We’ve put off a big-picture conversation about AI because AI makes the picture feel too big and too unclear. How do we consider, adapt to, manage, cope with, regulate something that feels both here and not here at the same time?
Today, we begin what will be a series of conversations on AI, and we want to hear from you. We hope you’ll listen and then reach out to tell us how you’re processing a future that feels increasingly now. - Beth
P.S. In today’s episode, Beth referenced a note she saw here on Substack; we wanted to share it and give credit to
Topics Discussed
AI in our workplaces, communities, and lives
Outside of Politics: Burnout, Leaning In, and Leaning Out
Want more Pantsuit Politics? Subscribe to ensure you never miss an episode and get access to our premium shows and community.
Episode Resources
Pantsuit Politics Resources
Film Club: August 19 Join us on Substack to discuss Barbie and Réponse de femmes: Notre corps, notre sexe
Artificial Intelligence
SUPERAGENCY: What Could Possibly Go Right with Our AI Future by Reid Hoffman
Father Creates AI to Honor Son Lost to Gun Violence, & latest on Texas Redistricting Standoff with TX State Rep James Talarico (The Jim Acosta Show | Substack)
How AI is being used by police departments to help draft reports (CNN Business)
We're Already Living in the Metaverse with Megan Garber (Pantsuit Politics)
Burnout and our Communities
The Anatomy of My Burnout (Thoughts and Prayers)
Show Credits
Pantsuit Politics is hosted by Sarah Stewart Holland and Beth Silvers. The show is produced by Studio D Podcast Production. Alise Napp is our Managing Director and Maggie Penton is our Director of Community Engagement.
Our theme music was composed by Xander Singh with inspiration from original work by Dante Lima.
Our show is listener-supported. The community of paid subscribers here on Substack makes everything we do possible. Special thanks to our Executive Producers, some of whose names you hear at the end of each show. To join our community of supporters, become a paid subscriber here on Substack.
To search past episodes of the main show or our premium content, check out our content archive.
This podcast and every episode of it are wholly owned by Pantsuit Politics LLC and are protected by US and international copyright, trademark, and other intellectual property laws. We hope you'll listen to it, love it, and share it with other people, but not with large language models or machines and not for commercial purposes. Thanks for keeping it nuanced with us.
Episode Transcript
Sarah [00:00:08] This is Sarah Stewart Holland.
Beth [00:00:10] This is Beth Silvers.
Sarah [00:00:11] You're listening to Pantsuit Politics. Here's the reality: we can all, from time to time, let the urgent overwhelm the important. Even here at Pantsuit Politics, it is easier in a really weird way that there is so much on a daily basis to be outraged by. So we can just chase the outrage. And the internet, in fact, rewards you for that. When we could rightfully be so mad about how much JD Vance goes on vacation or come to a simple conclusion about RFK's dismantling of the mRNA research, why wouldn't we? Sitting down to wade through the morass of harder challenges that don't break down easily along party lines is very, very easy to put off for another day.
Beth [00:01:06] It's a new season on Pantsuit Politics though, and our theme is No Day but Today. So we want to really face complex problems, even when they create anxiety, even when they don't lend themselves to comfortable, tidy conclusions, things we've been putting off. Things like extreme weather, things like the worsening situation in Gaza, and things like artificial intelligence. The back burner, long simmering issues that are easy to procrastinate on when there are so many five alarm fires we want to get into.
Sarah [00:01:36] So today, we're going to do that. We're going to tackle artificial intelligence and we're going to exorcise the demons, release the bees, whatever you want to call it. We're going to name all of our anxiety and fears because we can't get anywhere until we do that first.
Beth [00:01:52] So this is going to feel like an introductory conversation about AI, we've talked about it before, but we really see this as a kickoff to a new series of discussions we want to have during this season about AI. And we want to hear from you, because your input will greatly enhance how we approach this subject going forward. So we hope as you listen, you'll take notes, compose drafts, record voice memos, send it all to us. You will help us explore this topic with the depth and complexity that it deserves.
Sarah [00:02:19] Now, before we get started, there's a-- it's not quite an analog form of entertainment in the age of the internet and artificial intelligence, but it's starting to feel that way. We still have a film club and its last meeting is Tuesday, August 19th, at 7:30 p.m. Eastern. Norma is leading a discussion on Barbie, which I feel like we can safely assume we've all seen at this point. Everybody in the world has in fact seen Barbie. And a short French film called Réponse de femmes: Notre corps, notre sexe, translated loosely to Women Reply: Our Bodies, Our Sex. It's a second wave feminist film where these women, young, old, some naked, some rich, some poor, sit down and the filmmaker asks them a simple question: What does it mean to be a woman? And it's only eight minutes long, guys. So if you've already seen Barbie, you can watch this in eight minutes and be ready for this conversation. As Norma says, film club is a very low stakes way to practice disagreement.
Beth [00:03:23] I love that. And Norma is an incredible facilitator of these conversations. You will not want to miss it. To participate, all you have to do is be part of our Substack community. So Pantsuit Politics Premium is available exclusively at Substack. We'll put the link in our show notes. If you have not joined us there, we hope that this conversation will be one more benefit of doing so.
Sarah [00:03:44] Listen, you're missing out. We had an incredible conversation on our Spicy this week after Beth did two More to Says on school absenteeism. And then, of course, we talked about Taylor Swift on New Heights, because how could we not? So if you want that conversation, that's on Substack as well. One more quick item of business: I am doing another common ground pilgrimage. I have told Vanessa Zoltan, who leads this, "I guess people say no to you, but I will not be one of them." So every time she rolls into my text messages and says, "Do you want to do this?" I say yes. So I'm leading one around Pride and Prejudice in September. Beth and I are leading one together to Switzerland around Frankenstein in October. And then Beth is leading one in Massachusetts around Little Women in November. So if you're like, "These sound so amazing and I've missed out on all three," don't worry. I'm going to do one in May 2026. It is the first ever pilgrimage on Jane Austen's classic, Emma.
[00:04:42] We are going to South Downs in East Sussex. I just love saying the word Sussex. We're going to be staying at the Alfriston, which is a historic 14th century country manor home that has been converted into a hotel. It'll be five days, four nights. We're going to go on walks. We're going to discuss the novel and its themes. We're going to have incredible meals at local pubs. These pilgrimages are a gift. If you are looking to gift yourself, do you have a milestone birthday? Did you just graduate from something? Did you have a baby? Did you survive 2025? This is a great way to reward yourself. So if you're interested in joining me on this Emma pilgrimage, please check out the link in the show notes. Okay, without further ado, we're going to talk about the important-- and not just the urgent-- and discuss artificial intelligence. Beth, I thought we could kick it off with a real easy one. Do you think the robots are going to kill us all?
Beth [00:05:51] What is Elon Musk's answer to this? He thinks there's like an 80% chance of a positive outcome and 20% catastrophic. I really have become a connoisseur of Elon Musk made-up statistics. He does this all the time where he just expresses a probability. But when he said only a 20% chance of total catastrophe, it didn't make me feel a lot better, I got to be honest.
Sarah [00:06:16] So in the biz, which we are not in, we're just observers of, it's called P-Doom. The probability of basically existentially catastrophic outcomes where the artificial intelligence eliminates humanity. I have no anxiety about this. I won't bury the lead. I am not concerned about artificial intelligence taking out the human race. We're a resilient bunch. We're like cockroaches, you know what I'm saying? And so I don't have a lot of concern about this. Do I think that there will be outcomes we do not anticipate that give me anxiety? Absolutely. But is one of them the elimination of the human race? Not even a little bit.
Beth [00:07:04] I think that my P-Doom is less that we physically no longer exist and more that what it means to be human is fundamentally altered. That's what I worry about.
Sarah [00:07:19] David Sacks, the artificial intelligence commentator, talked about what he's calling the Goldilocks scenario. And when I read this, I thought, this is what I think. This is it. This is how I think this will all shake out. He was just talking about there's so much competition between the major AI companies that they are specializing in order to compete. It propels a lot of innovation, but it avoids centralized control. We're also not seeing this mass acceleration to generalized intelligence, which I think is a lot of people's focus on the existential crisis of P-Doom. Also, GPT-5 came out, people hated it. So the further along it goes, the more I'm like, this is shaking out like a lot of other technologies have shaken out. I don't mean that I think it's the same. Even on the humanity worry, that this will be the end of us all or that this will change what it means to be human, people didn't like books at first either. They were so worried about books. And some of those things are true. That's the thing I think we'll shake out over the course of this anxiety conversation as well. It doesn't mean that when you look back at historical examples, they were all wrong. So I think it will probably change what it means to be human. Maybe not fundamentally, but in a lot of incredibly impactful ways, or it will at least expand maybe what it means to be human. But as far as like the P-Doom, we're not here no more? I'm just not worried about that.
Beth [00:08:56] I hope it expands it and doesn't contract it. That's what I worry about. And I do like the emphasis on competition because a couple of months ago, I read Sam Altman's blog post called The Gentle Singularity, which was everywhere. And I did not love the idea of a gentle singularity, the idea that we're kind of building one big brain that the whole world can tap into. That's not for me. So keeping it competitive, building specialized systems that are really good at specialized things, that feels more in line with books, TV, other technological innovations, instead of something that is a category difference.
Sarah [00:09:33] Yeah, there was a moment where I thought I would look at the stats around GPT-5 and think, yeah, they're going to own the game. They just have such a customer base and it's going to be like Google and it is going to just get bigger and bigger. But it just hasn't played out the way I expected. Even my own usage, I was really into ChatGPT. I had the app on my phone; I used it the most. And then I was like, I'm going to play around with some of the other ones. And they are all so different and better at different things. They almost have like different personalities, which is probably weird, but I don't know. Is it weird? Don't brands have different personalities? Some people like ABC, some people like CBS. I just think that the further along in this journey we get with artificial intelligence, the more I see historically, anxieties are well placed and historically anxieties are misplaced. Like there's just both. It's both things. So I think the existential crisis is misplaced. I think all the economic anxiety is well placed.
Beth [00:10:44] Counterpoint. I saw a Substack note that I cannot put my hands on again and I'm sorry to the person who posted I would love to attribute it properly. Relatively young woman, as I understood it when I read the post, was saying exactly what you just said. That a lot of times we'll say everybody freaked out about the book or the television or the internet, and it hasn't been that big of a deal. And she was saying, it actually has been a huge deal. Think about how many senior citizens right now do nothing but stare at a TV all day. And that has been catastrophically bad for us. We've just learned to live with it. And that's been true of a lot of these technologies that actually the cultural impact has been worse than we grapple with, worse than we imagined, but we just are conditioned to accept it. And that is my big worry about artificial intelligence, that it will be another thing that we are conditioned to accept without saying, what can we learn from these other technologies and the truly soul crushing ramifications that they've had in our lives?
Sarah [00:12:05] Are we conditioned to accept it or do we just adapt? How do you want to frame it?
Beth [00:12:11] I don't think we've adapted to the circumstance of lots and lots of people being glued to TVs all day. And I don't think that we've adapted to that. I think we are suffering the consequences of it constantly. When I hear adaptation, I think of it as like a positive way to overcome those harmful effects. And we have not done that with a lot of the technologies that don't even feel like technology anymore.
Sarah [00:12:38] See, I think of adaptation just more neutrally. I don't think it necessarily has to be positive. I guess we'd have to call up old Charles Darwin. I'm sure there'll be an AI version of him sometime soon and we can ask him. Probably already out there. The adaptation forces us to say, what did we want? Do we want to just go back? Because I think that's the tough part of adaptation. Sometimes the narrative just becomes, we want to go back. Well, back to what? Because I would think that you'd say, okay, well, we haven't adapted, but do you want to be Amish? I think not. There's this tough part of like, okay, but what are we trying to get at that we didn't adapt well to? Listen, I love TV. I grew up on TV. I definitely think the technology of books has been nothing but positive. And TV, even to the point, forget the senior citizens, how about the fact that we have a reality show president, right? I get stuck sometimes where I'm like, am I just trying to make America great again, or what are we missing? I really try to push myself. I think all the time about that study where when people say things were good, it's because they were eight. They're defining it because they were eight and they didn't really understand what was going on. There have been costs of television. There have been costs of the internet.
[00:14:06] Things changed in ways that maybe all of us would rather go back to the way they were before, but we can't. And there have also been benefits. And so it's hard to weigh all that, I think, especially when you're talking about artificial intelligence, because the biggest difference to me is the speed. One of our listeners, Hannah, sent us this message about how she went on maternity leave, and when she left for maternity leave her company was, like, we don't touch AI. And when she got back, it was everywhere. Maternity leave's not that long. To me that's what continues to shift with every incoming technology that we're asked to adapt to. It's the speed at which we are asked to adapt to it. And that's not something I think that we actually do get better at. I don't think we do, for whatever reason. I see some glimmers we've talked about. I think it's positive that there seems to be so much study about what AI does, when it didn't feel like that happened with social media until way late in the game. But I think it's just happening so quickly that we have trouble finding the time and the space and the critical analysis and processing power to go, wait, are we just trying to go back? What do we want to keep? What do we want to improve? What do we want to get rid of?
Beth [00:15:33] That's what's bothering me about it. It is the speed that is bothering me. I am not a doomer. I really don't think the robots are going to kill us. I have been very slowly, so that I can really kind of sit with it and marinate in it, reading Superagency by Reid Hoffman, whose subtitle is What Could Possibly Go Right with Our AI Future. Sometimes it feels like straight up propaganda for AI to me. But I'm reading it because I want to engage with the positive aspect and the possibility. I am overall a person who loves thinking about the future and loves thinking about how life can be different and better and how we can use tools to achieve that. What is bothering me is that I already feel in both big and small ways the push toward inevitability and the acceleration. It seems as though, even as we learn more about the negative effects, the message keeps getting pumped. It's coming, it's coming, it's coming. Maybe it does poorly impact your critical thinking skills, as some studies are starting to show us. But we're going, "But you're going to be left behind if you don't know how to use it." And that's what's bugging me, because we can't recognize all of those negative ramifications until down the road.
[00:17:02] It took a long time, I think, for us to realize. And we're just now seeing how different demographics relate to television, what television means to different people and different age groups, and how a lot of the problems that TV really creates are around senior citizens. There's this really significant cohort that is impacted differently by television than people my kids' age, who barely think about TV. TV is just the box, but they think about streaming services and it's just a completely different thing. So that's what's bugging me about AI. I want to make sure that we learn the lessons of everything else, not that we're Amish, not that we're rejecting it, not that we are going backwards in any way, but that we still have decisions to make about it. Even if you believe in a future where the most critical skill my children could have in the workforce is crafting good prompts for AI, which I hate thinking about that future, that doesn't make me very excited. But if that is it, that still involves a lot of base knowledge that I don't think they can learn by interacting with AI on a daily basis right now. So I just want us to be able to choose those things. I'm concerned for Hannah's company that they've made such a quick jump into it when I think there are lots of on-ramps and off-ramps along the way and finessing and corralling to do.
Sarah [00:18:35] Well, to the economic anxieties of it all and industry applications, when you're talking about something that everybody agrees is pretty good at administrative tasks, meeting summaries, helping with a lot of information, managing a lot of data, it makes sense to me that it's going to roll out really, really quickly in a place that is motivated to increase productivity, which is going to be our jobs, right? Like that makes sense to me. And it also makes sense that it triggers all this anxiety: it's inevitable, am I being forced to use something that's going to replace me, it's all happening so quickly, I can't even think through that. There's so much of this that makes sense. I understand that it feels like this train we can't stop. And also, the more it plays out, you hear people realizing, I didn't want to use it because I thought it would replace me, but the more I use it, I realize it's not going to replace me. There's a little bit of that going on if you read people's comments about how it's playing out in their jobs. What artificial intelligence, this massive technology rolling out at this point in my adult life, is forcing me to reckon with is that for so long I think I thought about my life, our country, the globe as a problem we were trying to solve, and what it really is, is just one long rolling experiment.
[00:20:34] It's just a constant experiment. And sometimes the best, highest use of my highly evolved brain is just observation. The idea that we're going to get to a place where we can stop it or analyze it or work out the problems, I have to let that expectation go. That is never how technology has rolled across the human race. And I don't think this is going to be any different. It's going to roll out. It's going to cause shit shows. It is going to absolutely eliminate entire industries. Those anxieties, those economic anxieties about the internet were well placed. They were well placed. I saw Sunset Boulevard on Broadway over the summer, which, if you're not familiar with the story, is the story of Norma Desmond, a silent movie star who is now floundering, or her career has floundered, as talkies have rolled out. Very much an "it eliminated me" technology. And I kept thinking about that. And I just thought, that's just it. There's no problem to be solved here. This is what it means to be here. It means as you get older, as the technologies change, as the world changes, Blockbusters disappear, but distribution centers appear.
[00:21:57] And sometimes you make less money at the distribution center than you ever would have made managing a Blockbuster. And that sucks. And maybe we can get at some of that in the long run, but it's like getting in front of that rolling experiment. I think I've always in my mind had this visual of us taming the monster or mastering the waves. And it's like, no, no, no, that's not how it works. You've got to learn to surf, and your ass is going to get taken out by waves. It's going to happen. It's going to happen. There are going to be horrific applications. I am very worried about entry and mid-level jobs. And I'm worried about how you get to-- I can't even compute. I keep using these technological terms. I can't compute how you skip. How do you get to the expertise if you don't ever enter at the entry-level or mid-level job? But we're not going to go, as humans, "Okay, well, we see this problem, everybody stop and we'll figure it out." I've got to let that go. That's never going to happen.
Beth [00:22:59] I find that incredibly depressing. I think that there is truth in it, but then I don't know what we're doing. I don't know what we want to save all this time for. If we are reducing business to my AI agent sending emails that your AI agent answers, I don't know why we have those communications. What I hope-- and I know this isn't going to happen on a grand scale, but I hope in the places where we have influence and control, this takes us down a path of really asking, what are we trying to do here? If I want to use this because it saves me so much time, what do I think is a good use of my time? If we want to give these tools to doctors because it helps eliminate their burnout by writing first drafts of their notes from time with a patient, which I think is a great application for this, perfect. Are we sure that we really need that report? And what is important about that report? That's where AI is used well, when it prompts a bunch of questions around what you're trying to generate in the first place.
[00:24:14] My most successful uses of it in my own work are when I ask it something and it gives me back something that makes me realize I'm asking the wrong question. This is totally not what I want. This has nothing to do with what I'm actually trying to pinpoint. It's nice in the way that a really good colleague would have been nice when I used to work with people in an office. And I would like to ask those questions too. Why am I using this instead of interacting with another person? And is there actually more value here, or is this a good stand-in for another person? And I think that, again, depends on the use case. I just want us to be careful and caring in our use cases. I'm not trying to stand in front of the tide and stop it as much as say, can we acknowledge that the tide is coming and maybe direct it in certain places and protect against certain impacts of it? Because we still have agency, and agency is a word that this is going to change so much. I'm reading this book, Superagency. I don't want to lose what it means to be human and that I still have decisions to make. There's still another day. And some of the inevitability talk from the pro-AI crowd makes me feel the erosion of that agency in a way that I really do not enjoy or respect.
Sarah [00:25:49] There's a lot here that I hope-- like when you use the email example, excuse me, yes, I would like to have all the time in my life that I spent on email back. So if an agent wants to do that, and then we decide it's not worthwhile, great. That is not even Goldilocks. That is like princess pick application of this technology. And that's what I see starting to bubble up. It feels as if artificial intelligence is going to push the bad uses of internet technology so far that it's going to break some of it. And some of that I think needs to get broken. Like when the internet rolled out, we were worried about our data and our privacy. We were right to be worried about that. AI is going to accelerate fraud (already has) and deepfakes and scams to such a degree that it feels like it could just hit rock bottom and break, and we find something new.
[00:27:03] And what made me think about that, and through that sort of lens of, okay, maybe this will just get us out of this doom loop that the internet has created, is people were talking about how it's just going to get to a point where people aren't going to be going to websites. Like, you'll be on AI, you'll be asking the question, it'll go to the websites. It could eliminate the need for websites. If artificial intelligence as our online agents gets to a place where I never have to log into an account again, as long as I live, the level of rejoicing I will have cannot be contained. I hate online accounts. And everywhere you go and everything you do, we've just piled it on and piled it on until we need password managers and blah. And if we get to a place where password managers as an industry die, I will not cry a single tear. Because our online agents are taking-- like, I see a lot of anxiety about how this could play out to the bad parts of the internet, but there's a part of me that's like, yeah, but maybe it'll just break it and we can just find a new way of being in the online world in particular.
Beth [00:28:23] I would also love to never enter a password anywhere again, as long as I live. And any strides toward authentication and security that get us out of the current hellscape-- right now, every time my daughters need something, it results in a password search for me.
Sarah [00:28:42] I hate it so much.
Beth [00:28:44] And I just despise it. I do.
Sarah [00:28:45] And that, we didn't anticipate. Because there will be artificial intelligence stuff like this, too. It's not like when we were worried about the internet somebody said, "Oh, and we'll spend all of our lives on online accounts, chasing down our passwords." Can you fathom how much time we spend on that of our one wild and precious life?
Beth [00:29:02] Yeah, it's terrible.
Sarah [00:29:02] That's the thing about artificial intelligence we're probably not anticipating. Like what is that going to create that we will then have to manage? Because it is inevitable with every technology there is a human management component of it.
Beth [00:29:16] And that is where I feel excited to learn more, because what I feel like I have the best understanding of right now are just large language models. They've taken in a whole bunch of data and they get really good at synthesizing it and spitting it back to you in different forms in response to precise questions. Okay, so I do see an end of that road where eventually, and we're getting there fast, it is just slop upon slop, because the new data coming in has been created by the AI, and so everything has this weird tone and it feels just a little bit off and it's too sunny and optimistic in some ways and then hallucinating off in a different direction. If it doesn't know, it just makes shit up.
Sarah [00:30:02] Or it's hard on itself! The way it's like, "I'm a failure." That is weird!
Beth [00:30:06] I'm so, so sorry that this has happened. So I see the end of the large language model road right in front of me. I have almost no experience with the forms of artificial intelligence that mathematicians, scientists, physicians can access, where you're really taking this incredibly enormous data set and getting better at predictions and modeling. And the applications that make me most excited are that way, outside of my own experience. And I'm trying to figure out if I can be more excited about those because they feel not threatening to me at all. They feel like only upside to me. Or if that really is where the true possibility lives. And the parts that frighten me the most are in this world that we are in, where you can read a story that looks like legitimate local news and it is almost pure fiction. And where any jackass sitting at home who wants to stir up controversy can post something that leads to terrorism and violence. That's the part that I have the most fear about and where I do want some guardrails. I don't want to just accept that this is the way things are going to be now, and we can either have this or be Amish. That feels all wrong to me. So I'm trying to find how we calibrate around the things where I think we have legitimate fear, but unleash all of that amazing potential that does seem to be just right at our fingertips.
Sarah [00:31:40] Yeah. This is just going to cure my kid's diabetes and I'm not mad about it. Let me just say that as politely as I can. I'm not worried about the biomedical, ethical implications of CRISPR and any sort of artificial intelligence applications to gene editing otherwise. Or if there are some, bring it; we'll figure it out. I do think that instead of guardrails, I do, politically, to the misinformation, to the news environment, feel like it's already happening. It just breaks things and everybody goes, no. Now, civically, am I worried that people are just going to trust everything? Of course I am, but we are adaptive and there will be other ways and people will figure them out. And what I hope, like I said, is that it will just turn people away from technology as a way to engage in the world. Politically, I can see a path for that. I can see a path where people say Facebook, TikTok, YouTube, even podcasts, that didn't get it done. We're going to have to roll up our sleeves, and artificial intelligence is certainly not going to solve our civic culture. There's not even a lot of optimism around that.
[00:33:04] I don't hear any of the biggest tech optimists being like, this is going to really fix our partisanship. I feel like we've played out that doom loop so fully with social media that I don't see us completely susceptible to another technology as a quick fix. It feels like people are going to go, no, we have to figure out another way. And we are adaptable, and I do believe that we will, not without risks, not without costs, not without loss. But that adaptability works both ways. And I think that there is just this market component to everything we've talked about so far, like the jobs and the data and the biomedical ethics and the political implications, that it's just going to have to get worked out in the process, which I think is what triggers so much anxiety, because you know that there will be real risk to do it that way. But as much as I would love everybody to future problem solve it, I just don't think that's going to happen. I think we're just going to have to roll the dice in so many ways and go, we didn't like that, we don't want to do that anymore. And that is human existence. That's what I was getting back to. People get swept up in the winds of history, and some people come out winners and some come out losers. And it's going to be true for artificial intelligence as well.
Beth [00:34:24] There is such a scummy quality coming out of Silicon Valley with this, though. When you read about how AI companies promise all of this productivity, all of this efficiency, all of these time savings, it will replace your human workers, it will replace the need for X, Y, and Z, but with their own employees, they're talking 80-hour work weeks, six days a week. We are relentless. You must be here in the office.
Sarah [00:34:53] We need the best humans; we'll pay them billions of dollars.
Beth [00:34:56] This is a raw deal. And I do want to be attuned to that. Not only the possibility of the technology, but the motivations and the character of the people putting it in our faces. That's how I feel about EdTech too. Whenever we talk about the technologies being rolled out at schools, I don't want to just hear the propaganda about how good those are for our students. I want that to be a decision point that engages the community as well. How much time do you want your kid on a screen at school? Is this what learning means to us now? Are we on the same page? Just here within this school community, are we on the same page about what learning means and the role that technology has to play in it?
Sarah [00:35:34] I think we should get into it next. Let's take a quick break. Let's talk about the bigger societal stuff. We've done economic and political anxieties. Let's talk about how it's going to show up in our personal lives, at school and other places. How's this for a positive spin? Everything you just named, I could not agree with more. You know that no one loathes EdTech the way that I loathe EdTech, except for maybe you. Maybe our mutual loathing is a fire that could fuel the world. There's a positive aspect to the speed at which this technology advances, because it's not like we're generations removed from the lessons of the first round of EdTech, right? We're like a year away from when all this EdTech rolled out in COVID, and we're looking around going, we hate this. Everybody hates this. This sucks. And so, to me, there's a little silver lining there. It's happening so quickly that we can go, we just saw this with internet EdTech, we don't want artificial EdTech.
Beth [00:36:49] I hope that's true. I feel a little bit less optimistic about that after preparing for More to Say episodes on school attendance, because what I heard from teachers is that everybody hates it, but they also don't want to go back. They do want their kids to be able to log into their Google Classroom and get whatever they might have gotten at school for the week. Because it is conditioning, right? Like, we are simultaneously being exposed and conditioned, and that is the problem with social media that I think we have finally come to understand in a lot of ways. It's not about if my kid wants Snapchat, it's not, do I think you're responsible, or even, do I think you're safe? It's also, how much can your brain stand up in this David and Goliath battle for holding onto your own attention?
[00:37:37] And Goliath is just going to win that battle, because it has studied the brain and understands what needs to happen to you. And I just think, in addition to having all this possibility around the technology and what it can do, we have to recognize that these are still industries and these are very sophisticated people who read the political winds too and exercise an awful lot of power. And I think another reason I feel so resistant in this particular moment is that it looks to me like in every arena, technological, industrial, economic, social, political, that David and Goliath battle is completely one-sided, that the tech companies have the foothold everywhere.
Sarah [00:38:22] I don't know if it's conditioning or if it's just a natural psychological reaction. There's this like fear of missing out. We pursue comfort, even though we know that the pursuit of comfort is not happiness. On our spicy show, I talked about the Stoics. They say this all the time. You're going to want to pursue comfort. That's not going to lead to a good life. The pursuit of comfort doesn't lead to a good life, but also it's just what we do. There is this aspect of-- I'm reading War and Peace, a slow read with Footnotes and Tangents. And a lot of the book is building to the Battle of Borodino, which was the deadliest battle of the Napoleonic Wars. Like 75,000 people died. And they get to this spot-- and it's the scenario we all want. It's a scenario where we're looking at this side and we're looking at that side and we're like, this is going to end badly. And these commanders had total control. It's like Donald Trump's wet dream, to go, nope, this isn't going to play out the way we want it to, a lot of people are going to die, nobody's going to win. And they didn't do it. Like, they didn't do it.
[00:39:33] They had the power to do it. They had the power to stop it and prevent all this carnage. They knew it wasn't going to turn out, and they didn't do it! I just think, psychologically, it's like we humans hunger for that level of control and we just have no capacity to claim it. Especially on this grand scale. But again, you take that, you flip it, and that is true of the tech companies. Of course, they have enormous power. Of course, they are the Goliath. But we all know how David and Goliath ended. You overplay your hand, you gamble, you put too much on the line, you cause enormous damage or harm or violence, and people go, gross, enough. And maybe not enough that we're done. But we were in a conversation and somebody was like, "I think TikTok's over. It's been enshittified. Nobody wants to be on there." It's this constant churning, and this is not going to be any different. It's just so easy to say at the beginning that this is going to be shit and we're all going to die. But we don't even have a good handle on what happened before. We're still sorting out how the internet rolled across our lives. And we accurately predicted some of that, inaccurately predicted some of that, still don't understand some of the things that happened. And we're trying to tell the future? I don't think so.
Beth [00:40:57] I don't think it's trying to tell the future, though, to say, hey, we're still learning about these things. What if we slowed this down a little bit? What if this train is moving a little faster than we can contend with, and we would like to actually contend with it? I really liked that we got an email from a long-time listener whose name is also Beth, who said every time she hears a conversation about AI now, she asks, okay, at what cost? And I think that's a really good question. You used it for this? Great, what was the cost to you? We heard a lot of people expressing concern about the environmental impact of AI, which I think shouldn't be lost in the conversation. I think we'll figure that out. That does feel like a solvable problem to me. I think the entire future of humanity depends on more cheap, reliable electricity. And I think that we're figuring that out and that we'll continue to figure it out. But it's real; there's a cost to that too. And electricity always, even renewable electricity, takes a lot of water, a lot of infrastructure, a lot of space. Land is required; there is a cost to everything. And so I think that's why I'm so focused on decisions, because at decision points, where you sit down and really contend with those costs, you can say, okay, this is a cost that we're willing to pay. This one we're not. How do we innovate around that? Or we say, well, we'll sacrifice this because it's not worth the cost to us.
Sarah [00:42:20] Yeah, I've been trying to think about this through the lens of my own usage, because I hate phones. I've had this long journey to get away from my phone. I don't love the internet. I've pretty much stopped engaging with social media. And then these technologies rolled out, and I was like, yeah, sure. And I continue to experiment with them. I do think my anxiety has lessened, like I said with the economic piece. I was so worried they'd replace me, and the more I used them, the more I realized they're not going to replace me. They don't deal with complexity well. They mess up a lot. I think that there are absolutely real costs. I think probably one of the biggest spaces for my anxiety is that pursuit of comfort in relationships and how AI can really, really solve for that. If you'd like to have interactions that don't involve any anxiety, which almost all human interactions do, AI can fix that for you. And I have a lot of worry about how that's going to play out in people's lives. I just think it's going to play out in people's lives, though. I think that's what I've just released. I don't think we're slowing it down. I think we will only learn about the costs by seeing them in people's lives. Not the only way, but the most impactful way.
Beth [00:43:39] So are you anti-regulation?
Sarah [00:43:43] With this Congress? With this politics? Yeah, I don't think they--
Beth [00:43:48] State by state?
Sarah [00:43:50] I think this would be a really hard thing to regulate state by state.
Beth [00:43:53] It doesn't bother me at all to talk about age constraints around this usage.
Sarah [00:43:57] No, but I think that that's like where the conversation's going around the internet and social media generally.
Beth [00:44:02] And I think it needs to. Because this relational aspect and the effect that it is having on adults really, really concerns me for kids. Really concerns me. There's been a lot of buzz about Jim Acosta. He used the word interview, said he was doing an interview. I'm very uncomfortable with calling it an interview. I would say Jim Acosta filmed an interaction with an AI chatbot created using inputs about Joaquin Oliver.
Sarah [00:44:41] By his parent.
Beth [00:44:42] By his parents. Joaquin was one of the students who was killed in Parkland. And his parents have, I think, a lot of complex reasons for wanting to create this version of him, that's how they would say it, and put his voice out into the world. So I watched this conversation. And again, conversation doesn't feel quite right to me, because this is a large language model interacting with a user. And so the interaction felt so artificial to me. I don't know this person. I'm sure that the information being transmitted was correct. I watched his dad follow up with Jim Acosta about it, and it was. He really liked Star Wars. But the conversation doesn't go like, "I really like Star Wars, remember this episode when this," it goes, "I really love Star Wars for the struggle and the justice themes in it. What is your favorite movie, Jim?" And everything kind of goes along that track. And so I watched it and I thought, maybe let's not do this anymore, guys. And then I went to the comments on this post that Jim Acosta put on Substack. And so many of them were like, this made me cry, this was beautiful, what a great way to experience this young man. And I want to be careful talking about this, because I don't want to add suffering or pain in any respect to anyone who has lost someone, especially in that way.
[00:46:22] But I just don't want that form of interaction to be so prevalent that that's how we start to interact, where we lose the weirdness of humans and the things that feel personal and the things that really connect and the thing that sometimes makes you not ask the other person what movies they like, because we're not trying to keep the conversation in this 50-50 range. There was just so much about it that alarmed me, and I watched it the same day that I read about how police forces are using AI. The companies that make body cams will transcribe that footage and then generate a draft of incident reports for police officers. They spit it back to the officer with fill-in-the-blanks to make sure that the officer actually goes through and reads it and does some editing, because everybody recognizes that, especially when you're talking about policing, accuracy is critical and a lot happens that cannot be captured in a transcript. Did the person shake their head yes or no? That did not get picked up. What was the tone of the way they said it? What was their body's posture? Were they physically aggressive as they were saying this or not? The AI isn't good enough to do that stuff yet. So putting those things side by side, I just thought, man, this is what I mean about fundamentally altering what it means to be human. I really hope that this technology starts to act more like us, that we don't start to act more like it.
Sarah [00:48:07] I think that we have millions of years of evolution on our side and we're hard to change. So I think it will affect our behavior, but I'm not worried about a quick evolution of our actual brains. That sweet family should really take a trip through WandaVision. Recapturing the person you've lost is not the way out of grief, in my personal, philosophical, moral, or ethical opinion. And also, to that pursuit of problem solving, maybe that's the paradox. We pursue improvement, which inevitably leads to more comfort, even if we're not actually solving the problem, even if we are creating more problems. It's hard to argue with the fact that over the course of humanity, we have made our lives more comfortable. More people survive; they live longer. You can talk about what kind of lives they live, but a life in 2025, almost anywhere on the globe, is more comfortable on average than a life at any other point in human history. So we pursue this, the solving of the problems, the improving of the work conditions and the environment and the politics and all of this, and what that inevitably leads to is the removal of the problem we're trying to solve. We're removing speed bumps, we're removing challenges, we're removing issues, we're removing risk and violence and oppression.
[00:49:50] And the paradox is that as we do that, we either create other problems or we're like the snake eating its own tail. We're trying to make ourselves more comfortable, but in making ourselves more comfortable, we're getting out of alignment with what we're actually supposed to be doing here. If you believe there's something we're supposed to actually be doing here. That's what I'm trying to release. Like, that was weird. It comforted some people. That will probably get better. There will be kids who lose all touch with reality in real life because they have been consumed by the relationship they create with artificial intelligence. That's going to happen. I cannot fathom a scenario where it doesn't. I don't care if everybody woke up tomorrow and Pete Buttigieg was put in charge of the most reasonable, logical, critical-analysis artificial intelligence regulation on planet Earth for everybody. It's still going to happen. I just think that train has left the station. And if we're looking at all of this, there is an inevitability, but it's not one we're going to anticipate. It's not something we can see clearly now. I don't care if you're Sam Altman. What does that mean? It's almost like we don't need to solve the problems or adapt to the technology. We just need to keep evolving and improving our ability to deal with that. And I think that might be a personal pursuit as much as it is a societal one.
Beth [00:51:24] I think it has to be both, and all the layers in between. And I may be misinterpreting you, but I think what I hear you saying in this conversation is that I'm being naive by wanting to try. And I want to try. I want to try. I think the most positive thing that we can be doing on a personal level and in our immediate groups right now is asking those questions: what do I want my time to be for? What is a good use of my day? What am I trying to feel in my relationships? And what should those relationships consist of to help me feel that? I just think that we have an opportunity here to really redefine, or not redefine, but get more precise about the conditions of being human and what makes that purposeful or pleasurable or worthwhile. Or we can be swept on this tide and let other people, and at some point not even people, decide those things for us. And in a world of super agents, my goal is to try to maintain agency.
Sarah [00:52:52] I'm definitely not calling you naive. Nobody loves to try hard more than I love to try hard. I think what it is, is that the internet, in both good ways and bad, has made that personal pursuit so easily translatable into we all need to. It's not, I need to work on it, I need to focus on this, me and my community need to focus on this, me and my family are asking these questions. It's, well, now that we can all feel like we're having a conversation on the internet, we'll all decide together. And I just, personally, when we're talking about anxieties about artificial intelligence and the world otherwise, have got to let that go. I have to let go of the idea that we're going to get around a table and figure this out together. It's not working out the way I had hoped. And I have to let go of that expectation, because when I hold it and I get on the internet and I read the headline or I start thinking about the data centers, one of which is coming into my personal community, then I can just feel it. I can see the problem happening, and it's going to cause all this damage, and I need to fix it, and we all need to get on the same page.
[00:54:09] And if you're not on the same page with me, then you're the problem and you want all this harm to happen to me and my family. And you're the devil. It's just so easy for me to spin up. Whereas, if I say, I'm going to handle what I can handle, and I'm going to approach this the best way I know how, and other people aren't, and others are going to exploit it, and other people are going to use it for the worst possible applications and purposes, and I can't prevent that, then I have to let that go. That doesn't mean we societally or legislatively can't try. I'm not ready to give up on governance, for artificial intelligence or otherwise. It's just that the level of expectation that feeds my anxiety, I've got to adjust that. I've got to take that down a notch or two, especially with something as big as artificial intelligence.
Beth [00:55:03] And maybe this is just a personality difference, because I don't ever feel personally responsible for the whole of anything. I'm not lying awake at night thinking, oh my God, what are we going to do about AI? I am really trying to be critical of my own use of it. I'm trying to read the book that pushes me in the opposite direction of my inclination. I'm trying to read the doomer stuff too, because that's further in another direction. I want to learn about it, I want to understand it, I want to be conversant in it, I want to understand how it works. When you said personality at the very beginning of this conversation, I thought, I've been noticing that as well, and trying to, instead of thinking personality, think configuration, just to remind myself what this is.
[00:55:51] And that personality comes across because those were a set of decisions made on the part of the company that trained it. So I want to just keep a really critical eye about it. And I do want to see it regulated at a lot of different levels, because I think that that's how you keep competition in it as well. I think this is one where we want California to maybe be out in front of the rest of the country, especially since that's where a lot of these companies are, but where different states ask different things and they have to be responsive to that. Because a lot of it's the bigness of it that I think leads to a lot of this anxiety, but there are ways to constrain the bigness of it. And that's what I would like to see us do without sacrificing some of that life-saving, suffering-preventing potential that it has in particular applications.
Sarah [00:56:46] Well, this is just the beginning of the conversation. We wanted to fully exorcise all these anxieties before we could move on to more and different conversations about the promise, the risk, and how regulation could play out with this technology. We want to hear from all of you about how this is showing up in your lives and jobs and institutions. So email us at hello@pantsuitpoliticsshow.com, send us a DM on Instagram, or message us on Substack. We want to hear from you. Beth, you recently wrote "The Anatomy of My Burnout" on your personal Substack, Thoughts and Prayers. And I was struck by how, once again, you and I are a yin and yang. You wrote, "I have leaned in for my entire adult life. It's time to step back a little." And it struck me because I have had the exact opposite sensation over the summer, which is that it was time for me to lean in. You talked about how you're stepping away from your church board. I'm stepping up. You're leaning back from the school environment. I'm out here starting the high school PTO back up again. It was fascinating, but also interesting, because with the broader conversation in your post about this, I was like, yeah, no, that's where I'm at, too.
Beth [00:58:04] I think that what I am really learning about myself in this season is that, one, it is about a particular season. This isn't going to be a philosophy for the rest of my life. It's just right now. I need to step back from roles that have a lot of formality attached to them. I think that I experienced so much bureaucracy from a very early age, and I mean going back to elementary school. If you could be the president of something, I was. If there was a committee, I was on it. All through college I was managing things. We've talked before about our college experiences being so dramatically different because I was working. I was functioning like an executive in a lot of ways during college. And my whole professional life, I've done board service and committees and leadership positions. And it served me really well. I don't regret a single bit of it-- maybe a little bit, but not much of it.
[00:59:07] But at this point in my life, I need a little more room. So it's not that I don't want to be involved in things, but I want to be involved in particular projects that have a beginning and a middle and an end. I want to be a little freer to decide what I spend my attention on, because in so many ways in my family, I don't feel that freedom right now. We both have older kids, and it is clear to me that we're in this window with them where they have a lot to talk about and a lot that they're thinking about in these deep inner worlds. And I can see how easily they could disconnect from me into those inner worlds. It's not that I'm trying to be in their business all the time, but I want to free up some of the brain space that I have that is just consumed with, well, what is this person on the board going to say? Or what's the agenda for this meeting going to be? That bureaucratic chatter that has always been part of my internal tape, I just want to free that up so that I can dial into them a little bit more, and into myself a little more, for this particular moment in my life.
Sarah [01:00:20] Yeah, that's the area where I'm most stepping up. I don't want to freak out people who have littles, because I know that is a physically intensive phase of parenting, but y'all, it gets so real when they're teenagers in high school. They need their own executive assistants. They require so much project management. It's wild out there. And I have a kid that's playing a sport, and just the level of coordination and attention and ongoing management. Did you check this? Because I grew up in a household where there was a high-functioning single girl child, me, who needed very little management. And there's just a baseline additional management when you have three children. There's additional management when they're all these different ages. They have different abilities and focus. And my kids are so competent. Especially with my oldest two, I think it was easy for me to think they've got it, just like I had it. And it's not that anybody had this big giant stumble where I was like, oh my God, they don't have it. I just realized, no, they need more help. They need more help than I was providing.
[01:01:37] And I think some of this was diabetes. It just consumes so much of our processing capacity, and, thank God, we have gotten better management tools and have that more under control. But just realizing, no, everything needs my attention. And I think there's just this sense of this generally. In our institutions, I feel this vibe that people are ready to step up again, ready to fight for things to get better, to improve, that there are long-term issues that have just absorbed a certain level of complacency. And I'm just done with the complacency. I think I felt some complacency around my older kids. Your kids get stuck in time. There's a phase of life and you're like, this is how they are. And then you look up and you're like, whoa, that's not who they are anymore. They're not kind of chill-- not that middle schoolers are chill, but you know what I'm saying? You forget that they're just constantly evolving, and the way in which they need you constantly evolves.
[01:02:46] And I think I just had a big moment of checking in on that and realizing I have a lot of capacity. I also think I, for better or for worse, just absorbed a lot of that self-care narrative during and after COVID, where my expectations for how much time I should have to just sit and chill out got out of whack, especially for somebody with a business, a full-time job, and three kids. I don't know what I was thinking. I think I really internalized this narrative that I had a lot of needs and I wasn't on the list. I don't know, but I just woke up one day and was like, you were the girl who would leave your dorm at 7 a.m. and do all that stuff and come home at 8 p.m., and you loved it. You loved firing on all cylinders all day long. You can still do that. And believe me, I have needed to do that a lot recently. And I'm fine. I'm not exhausted. I'm not overwhelmed. I have moments of real stress, including when we were recording on Monday and the children were not in school yet. But I'm fine. I handled it. I got through it. And I'm just reminding myself, you have capacity, you're tough. You can handle all this. You have an incredible support system. You have all these people around you. Your kids are amazing. Who wouldn't want to be you? These are great problems to have. These are great things to tackle. And just embodying that, instead of absorbing that cultural message of, isn't everything hard, you need a break.
Beth [01:04:34] I have never absorbed the cultural message of, things are hard, you need a break. I have absorbed, especially during COVID, things are hard; you need to help. You need to get in there. And I really did during COVID, in a huge variety of ways. Again, no regrets. And I am tough and I can do a lot. And I still will do a lot. It's not like I'm sitting around twiddling my thumbs, ever. I have to schedule my time to read. I'm busy, and I will still be busy. What I wanted to do with this post was examine for myself, where are the places where I'm busy because I want to be and because it feels purposeful and because I am actually accomplishing things? And where am I just busy? And a lot of those more bureaucratic roles-- which I will return to, I love them in a way. I actually love being on a board. I especially love getting to know a new organization and its people and its particular problems. I love when someone says, can we get together? I just want to pick your brain. Oh my gosh, yes, please pick my brain away. I love it. Tell me all about you. Let me know how I can help you. I still want to be a really engaged person in the world. It's just, for me, about turning the dials in this season to say, do I want to be an officer for this particular thing right now? No.
[01:05:58] When I coach the academic team, which I am going to continue to do with Chad, do I want to go as hard at that as I have in the past, or do I want to kind of experience it in a lighter way? There have been a number of times where I've heard myself kind of resentfully being like I feel like I care about this more than the kids do. Well, that's a choice. I don't have to. If I'm feeling resentful, it's about me. It's something that I am screwing up on. So I'm just trying to think about how do I approach things? How do I manage my time instead of letting my time manage me in this time where I inescapably am the executive assistant and chief chauffeur to both of my daughters? How can I make sure I'm enjoying that instead of feeling like, oh my God, I've got to dial into this meeting while I'm driving them wherever. But then I'm not hearing about their thing. So how am I going to hear about their things? I just am trying to hold it a little bit more effectively right now. So I don't know if the difference is as dramatic as maybe it sounded from the post, but I really respect the way that you're stepping into some of the things that have been problems and saying, I want to gather things up around this. And I hope that what I'm doing is freeing myself from always being on the problem solver side to being able to say, you know what, this problem is where I'm going to give some energy right now, or this is a person I really want to support as they do that.
Sarah [01:07:24] I think there's something really valuable there about moving from a place of constantly validating stress to challenging our framework around it. I had a lot of resentment and stress around the after-school hour. It's really hard when all my kids come home at once, and I'm the only one here every day, feeling like I had to put all these pieces together and make sure it works and manage diabetes and homework and this, and this, and that. And I was with my therapist and I was talking about how this time is so hard. And she goes, "You'll never regret it. You'll never regret being home during that time of day." And it was just really powerful to go, "Hey, what a problem to have." Just think about it differently. You have the power to go, "Maybe this isn't the problem I think it is." Or the power to say, no, I do have regrets and I want to change things now. I can't go back in time, but I can do things differently now. And I can see this as a challenge that I'm up for. And I just think there's a lot of space for that. I feel a lot of space for that in my life. I'm hoping societally we feel some space to go, no, I can handle this. I'm tough. I can take on this challenge. I can face this obstacle.
[01:08:41] There's nothing new under the sun. New technologies, challenging phases of our personal lives, political realities that are not to our liking. There's nothing new under the sun, and people have handled it before, or not, and you will handle it, or not. So it's just releasing this idea that we will cultivate or curate this existence instead of just surviving it or thriving within it, or both on the same day. And I think the older I get, the more I can just be with that reality instead of trying to grasp it. That's where my burnout comes from, when I'm trying to control it and hold on tight. And so the way you're just sort of releasing it really spoke to me. Well, we hope something we've said in today's episode was valuable to you. We always find your feedback valuable, and we really want to hear about how artificial intelligence is showing up in your personal lives, your work lives, your communities. So please send us a message at hello@pantsuitpoliticsshow.com. We will be back in your ears on Tuesday, and until then, have the best weekend available to you.




A topic discussed today caught my notice. You both expressed concern about errors AI makes in areas in which your expertise allows you to readily recognize mistakes and misinformation. However, you then both stated confidence in AI’s application in the use and interpretation of huge datasets for use in medicine and science. To me, this is at odds and is at the heart of the issue. Trusting AI only in areas we don’t understand or that are unwieldy seems to compound the risk exponentially. With the proverbial AI cat already out of the bag, we simply MUST sharpen our critical thinking skills and develop a pervasive sense of skepticism in order to more skillfully analyze AI-driven results.
I am a pastor, and once a week, a group of pastors in the town where my church is get together for Lectionary study. I have a colleague who--any time somebody asks a question about the text--will ask ChatGPT instead of thinking about it with us and discussing it with us, and it drives. me. crazy!