87 Comments
Megan Watson

This episode came out while I was on vacay, so I just got to it. It was very interesting to listen to Sarah and Beth's thoughts in light of the lawsuits that have come out about the dangers of AI to young people with regard to sexting and suicide. I am with Sarah that the bots aren't going to take over, BUT I believe that AI as it is now, unregulated, is more dangerous than good. I'm very concerned about the impact on mental health, and that it just gives people another reason not to interact with others in person. I'm also concerned it's going to continue the trend of dumbing all of us down.

And on passwords: I have been saying for years that I just want an ocular scanner. I think it's ridiculous that we need passwords for everything, from our kids' school accounts to every freaking online shop to every different medical provider we see. One of the medical providers we use in our area won't communicate other than through its app, which goes back to my first fear: minimizing actual interactions with actual people is not good.

Neshell

I work with data scientists who use information about people to predict their behavior (statistical modeling). In my industry, there are regulations as to what uses that kind of analysis can be put to. You can't use it against any individual. The analysis is just an input to larger conversations about keeping the business safe. Any piece of data we use for/against individuals has to hold up to scrutiny. If you can't explain how you made that decision, you're not going to be allowed to continue doing it.

Another thing I think about often is how people generally want things to improve. I think about progress as a bell curve, with the peak of the curve moving over time to the right toward improved conditions, taking the whole bell with it. When we use AI, we cement the peak in place and/or move it to the left. It does not drive us forward, but looks for a safe middle to return us to.

Liz Larson

I really enjoyed the conversation around AI. As an attorney and a writer, I'm definitely more in the put-on-the-brakes camp. But y'all made very good points about the attempt to regulate it. It made me wonder if we want something more like environmental impact studies than regulation. Meaning, before you roll out your model/product, you need to do the bioethical research into the impact it's going to have on different groups of people and society as a whole. Something more to give us informed consent than to actually step on the brakes.

Anne Percoco

I am starting my Christmas shopping here! Yesterday I gave ChatGPT a list of my 6 nieces along with their ages and interests, and asked it to recommend graphic novels for each of them. It gave me a pretty great list!

Anne Percoco

I have a small business that provides bookkeeping services to other small businesses and individuals. At the moment I am not that concerned about the entry level question. A couple of years ago we adopted a piece of technology that drastically cut our need for manual data entry. It is true that the manual data entry helped employees learn about categorization, but now I just need to teach them about categorization in a more intentional way. I think employers will have to invest more in training. And I have found other things for those employees to do with their time!

My husband has been able to use vibe coding to make custom tools for some small but repetitive jobs. What am I doing with that time saved? I am working less!

I am really looking forward to the possibility of trusting an AI agent to log into dozens of websites every month to download statements for my clients, which is drudgery. But at this point we cannot do that for security reasons.

Bookkeeping is one of the top fields that people say is going to be completely changed by AI. I think this will definitely be true for large companies that have many repetitive processes. It will take longer to filter down to the small businesses that I serve.

I am not scared. I know I will have to remain flexible. But I am smart and resourceful, and I will always be able to find a way to feed myself and my family.

Alyssa Crowder

Outside of Politics really hit home this week. I've had quite a few conversations with my family about reframing our use of the word "busy."

We say it all the time - "we're so busy" - and it sounds so negative and like a complaint, when the reality is that as a family we're careful about what we participate in and sign up for. So those things we're busy with are all things we've chosen! Using "our schedule is full" or "we've got lots of good stuff going on" is a small change but makes a big difference in attitudes.

ColleenTX

Great podcast and discussion.

I have all the same concerns and feelings about AI that Beth and Sarah have. I live very close to a lot of data centers in my suburb in North Dallas.

My husband is a Director at an A&E company in Dallas, and he is on top of a lot of the changes going on because he has to be for his job. My husband also knows a lot of "Silicon Valley" types in Dallas from the years he's been at his job. I was talking about AI with my husband and basically expressed all my concerns (a lot of what Beth and Sarah have), and his response was this: "Honey. What makes you think all the people who are pushing AI haven't thought of all those concerns?" The point he was making is very similar to what Sarah was saying, which is "We are here. We have to adapt."

I do have concerns about how AI is rolled out, especially with states and the government involved, or how they will be involved. I feel it needs to be regulated. How that looks, I have no clue.

Renee Schafer Horton

A fun way to use an LLM is to give it a weird, ridiculous prompt with "definition" appended, and it will spit back a funny, authentic-sounding definition. We tried "Babies never take the freeway, definition" in Google AI and it was pretty hysterical.

Renee Schafer Horton

Please regulate the hell out of AI. We are plowing forward like we in the U.S. always seem to do - oh, new! Shiny! Look, important people say I need it! Don't get left behind! - with ZERO forethought. Why are we so friggin stupid? We KNOW phones and social media are bad for all of us, but especially children, and yet we let that happen, and now we're going to think AI will be just fine? We are idiots. The U.S. only seems to work on something once it is already a crisis. We are losing our children's MINDS.

Theodora

Oh, and P(doom) gets me all worked up. It's a made-up statistic! No one's value is any more based on reality than Elon throwing his random numbers out there! Misuse of statistics really gets me going. Rant over.

Theodora

I hate LLMs. I think they’re dumb and ineffective and being sold as something they are not. My PhD research involved a significant amount of coding. I know how to manipulate computers. I know just enough about the rough mechanisms behind AI to be dangerous. Y’all. It’s just machine learning with bigger training data sets. It isn’t sophisticated and I don’t think it is anywhere close to effectively replacing humans. Now, that isn’t going to stop companies from believing that they can save money by replacing humans with LLMs. And in several years, they will need to call all the humans back to fix the issues that the LLMs caused. I think it’s going to screw a bunch of people over in the short term, which sucks because it is totally avoidable if companies/the government had some semblance of the long game in mind. But I think that the current AI craze is going to end up being a big nothing burger. Tressie McMillan Cottom has been very influential in my thinking around this. I’m willing to be proven wrong. Maybe AI will cure cancer. I just don’t think we’re anywhere close to it yet.

Renee Schafer Horton

That short-term damage --- or, as the post about TV said, long-term damage --- needs to be paid attention to.

Sarah Styf

Here's some perspective from a frustrated mom and teacher.

Indiana recently passed legislation requiring students to take a computer literacy class before graduating from high school. Because this is a new course that adds to graduation requirements, our district decided that all eighth graders would be required to take it. (My eighth grader was already taking Algebra and Biology, two high school courses.) I had my suspicions about the course, but I thought I would give it some time and see what it was.

During back-to-school night, I learned the course teaches students about how computers and the internet work, and they will spend the second quarter working with AI and coding. Why? I don't know.

Because here is the high school teacher's take: kids don't need to learn ABOUT computers. They need to learn HOW TO USE computers. Over the last 15 years, my English students have become less and less competent at using ANY computer tools: word processing, slide presentations, spreadsheets, etc. They are tech dependent, not tech proficient. If our students are using AI for writing, creating, research, etc., they will never learn even the basic skills they need to do work above entry level. We need kids to learn basic skills! Not how the internet works.

There, rant over.

MaryKate Hughes

One more comment…I recently read a book called Annie Bot that gets at some of these issues. I can’t stop thinking about it. Would love to discuss if anyone else has read it. The author is Sierra Greer.

Abby Stanek

I found out almost 2 weeks ago that my federal contractor employer at a state university is having a workforce reduction, including me. I knew the job market was terrible (part of why I wasn’t really looking to leave my current job), but oof it is rough out here. I’m seriously considering switching fields.

I've done a lot of things in 10 years of employment - outreach, project coordination, technical writing (for 80ish more days). I'm considering a bit of a wild pivot to education, in hopes that that field will see less of a hit to demand from AI than my current field.

As an added bonus, I live and work in a fairly rural area, and my partner farms, so we are geographically permanent. If I were to stay in my current field, I would have to compete with so many (!!) people going for not enough remote jobs.

Jody Winter

Our small technology company here in Auckland (we make payroll software; I'm the company's only technical writer) is heavily immersed in AI, and it's actually part of our KPIs and company mission and vision. I've been aware of LLMs since early 2023 (tech writers pounced on them just to check we weren't going to be replaced). Turns out, in less than two years, tech writers have become vital to the success of others implementing AI in software. Our CEO regularly tells me how essential I am. Turns out that AI has no choice but to RTFM.

So, I don't use it to write for me (maybe a little if I get writer's block), rather, I use it to design 'semantic layers' and reference docs for others in the organisation to feed to AI in order to map system objects (fields, tables, functionality) to plain language inputs. This is vital for customisations to our software and specifications involving complex configuration. I'm helping others give AI that much-needed context.

Payroll in New Zealand and Australia is extremely complex due to our rigid compliance legislation around paid and unpaid leave, and how worked hours are interpreted by payroll systems. Whilst our software is 99.9% accurate, we still need to be extremely efficient when responding to complex requirements and changes to legislation. So, AI has been a godsend in this regard. We are trying to treat it as more of a colleague than a tool (without being weird or creepy about it).

Yep - I do have concerns (mostly around over-reliance and eroding of core skills), but mostly I feel lucky to have such a forward-thinking CEO who supports experimentation.

Renee Schafer Horton

The over-reliance. I used it once with writer's block: lemme just give ChatGPT a go with this column I'm stuck on --- and realized IMMEDIATELY that I would quickly become reliant on it. We go for easy, not hard. Haven't used it since.

Norma Stary

On my birthday in February I went to a taping of The Late Show and Stephen Colbert taped an interview with Reid Hoffman. I have never seen an audience descend into hostility so quickly, nor have I seen Colbert so outwardly disdainful of a guest. I walked out feeling far more negative about AI than I did going in. (The women next to me said they were going back to Alabama to ring the alarm bells.) Beth, I look forward to your overall impression of Hoffman's book; there's no way I'm going near it.

I just blogged about AI this week because it has suddenly come up in my job in ways I don't like. My workplace is finalizing a set of best practices for use of AI, which I'm happy to share when the webpages are published (probably Monday, as I am the person who will publish them), but in the meantime my own supervisor has been encouraging me to use ChatGPT to do a few tasks on my to-do list. Not no, but hell no. In these instances, it would be pretty lazy to do that, because I would be asking it to do things I haven't learned yet. If I haven't learned them yet, I can't check the output for accuracy. I have come across instances of ChatGPT specifically being incorrect in other areas of my job; for example, I'm on a committee with someone who is a heavy user of ChatGPT. She once shared an AI-generated summary of a technical document we use that was wrong because it can't take into account how our industry uses the document.

I don't have specific fears about some of the ways scientists and analysts use tech tools, including AI, because they have been working with giant data sets for many years and they have the expertise to do that. A lot of technology and computing power has gone to science, and that's why scientific research has accelerated. What seems to be happening now is that these companies have run out of customers. There is a finite number of people who are doing research on complicated medical conditions; there are countless people who can't do these other less technical things--or creative things. They don't care if jobs or industries are eliminated as long as they're making money. This is why I don't trust them to program AI systems to be ethical. There is no bill of rights for AI users. We live in a society in which expertise has been devalued over time, and I expect the devaluation to continue at an exponential rate because "anyone" can now do any number of things with these tools.

I've doubled down on my use of my own website. If I care enough about something to share with the world at large, it goes there because it's mine.
