Episode Description:
In this candid and thought-provoking episode of What If? For Authors, Claire tackles one of the most polarizing topics in the writing world today: artificial intelligence. As authors grapple with the rise of AI, Claire delves into the underlying fears, anger, and ethical dilemmas shaping the debate.
Claire begins by addressing the elephant in the room: the pervasive fear of AI and its potential to replace or harm authors. From the "original sin" of AI training on copyrighted material to the polarized stances authors take, this episode seeks to explore the emotional and psychological roots of these perspectives rather than taking a definitive stance.
Whether you're staunchly anti-AI, an advocate for integrating technology into your writing process, or somewhere in between, Claire invites you to approach this topic with intellectual humility. She examines how fear manifests in our behavior, discusses the scapegoating and witch-hunting tendencies emerging in the author community, and offers insights into how we can navigate this uncertain terrain without losing ourselves in the extremes.
Key Takeaways
The fear response triggered by AI and how it affects authors.
The "original sin" of AI: ethical concerns surrounding copyright and justice.
Intellectual humility and the pitfalls of entrenched positions.
The tyranny of extremes in the AI debate.
Enneagram insights: How each type might engage with the AI discussion.
Support the Show: If you found this episode helpful, please leave a review on your favorite podcast platform and share the show with your fellow authors. Every review helps more writers discover this resource.
Join the Conversation:
Where do you fall on this debate? Share your thoughts and questions by reaching out to Claire at contact@ffs.media.
Happy Writing!
TRANSCRIPT:
Claire: [00:00:00] Welcome back to another episode of What If for Authors. I'm so glad you're here. My name's Claire Taylor, and I'm an Enneagram-certified coach for authors as well as a humor and mystery writer. You can check out my latest book for authors, Sustain Your Author Career, by going to ffs.media/sustain.
I have a surprise announcement to make. All these episodes of this show you've listened to so far? All written and voiced by AI. Obviously, I'm kidding. And you probably knew that right away, which is telling, I think. Yes, I can rest assured, at least for the time being, that my job as an Enneagram-based author coach is secure.
I've actually prompted ChatGPT a couple of times with Enneagram-related questions, and y'all, it sucks. It sucks so bad. As of recording this, I strongly advise you not to take any cognitive, emotional, or behavioral advice from [00:01:00] that model. Just keep listening to my episodes until one day the Enneagram Singularity occurs and I can stop spending multiple hours planning out my thoughts and writing down my notes and recording each episode, and simply let an AI version do it all.
Whoa. So, okay. Yeah, I'm stalling here. And the reason I'm stalling is because I've made it through 32 episodes of this podcast without addressing this goddamn topic. And I was quite happy with that, but it's finally time to address the elephant with an unnatural number of toes in the room. Artificial intelligence.
Okay. Yeah. So this is a no-win situation for me. I hope you understand that. I ask you to take a second to notice how much of the fear response you're experiencing right now while you listen to this episode. I'm noticing my own fear response in recording it. [00:02:00] So notice any tension in your body. Does your heart feel like it's racing?
Are you hoping I'll take your side in the AI debate? Are you afraid of what might happen to your respect for me if I express a point of view that differs from yours?
I really do want us all to notice how much our fear response kicks up, and how our need to become extra entrenched in our position on this topic flares when someone simply mentions it. Since I'm sort of the fear-discussion gal, as in, that's the whole point of this podcast, I would be remiss not to ask you to consider why you have this much fear around the mere concept of artificial intelligence use among authors.
Just consider: is that helping you? Are you even open to letting go of some of the fear you're grasping onto so tightly? If not, that's okay, but you might not be ready [00:03:00] to listen to this episode, then, where I ask the question: What if AI replaces me?
Because of the way this question is phrased, I assume most of those listening will be more on the side of fearing AI, or averse to AI, than embracing it. So I'll mostly talk to those folks, but I will also have some things to consider for the pro AI authors who might be deeply entrenched in their position that anyone who doesn't embrace AI fully, or at least to the exact same degree that they have, is a Luddite who deserves to get left in irrelevancy.
Fair warning: you will not be finding out where I fall in this debate personally. Consider my position irrelevant to the discussion, because while I'll touch on some of the ethical debate around AI, especially its creation, the point of this episode is to help you understand how your fear is playing a part in your thoughts, emotions, and behaviors [00:04:00] around AI, not to tell you where the line is. No one knows where the line is, even though a whole hell of a lot of people seem to think they know exactly where the line is, and it's wherever they fall on it.
Also, it's nobody's business where I fall on AI. So you think you have it right wherever you fall on this sort of spectrum, this continuum of AI, pro or against, or else you would think something different. But someone else thinks they have it right just as hard. So what gives? Add in the fact that ethics around technology have always been a moving target.
So what you believe today will likely look different from what you believe in 15 years. I know that's uncomfortable to hear, but all of the data about how people's opinions are influenced by society points to that being the likely outcome here. And it's not necessarily a bad thing, it's just a thing.
Now, I bet you didn't expect to hear moral relativism come from a 1 today, did you? I'm leaning into that [00:05:00] 9 wing for this episode to try and help people see what they're not seeing, that the OTHER side is seeing.
There's this funny thing that people do when it comes to anxiety. Right? And this is all of us. We assume that how anxious we are is exactly appropriate to the situation. Anyone who is less anxious than we are is blind to the reality of the situation, and everyone who is more anxious than we are is overreacting and needs to calm down.
We tend to believe we've found the exact right point on the worry continuum and everyone else is too far in one direction or the other. I made kind of a joking video about this a couple years back when I started working out more. It was like, do I think that I'm better than everyone who works out less than I do?
Yes, and everyone who works out even a minute longer than I do each week obviously has a weird workout addiction. So we should recognize that our brains do this: the point we stand on in a continuum, like anxiety about a situation, is [00:06:00] obviously going to seem like the logical best place, or else we would not be there.
But that does not mean that every other point on the continuum must be wrong. Looking at a situation that way is what we call intellectual humility. So I hope we can all practice some of that for the next half hour or so as I continue to talk at you.
I've been lurking on the internet in author spaces, listening to the arguments between authors about AI for a while now. By and large, the loudest stances are not particularly nuanced, and best selves are not showing up, which is a sign to me that there is a shit ton of fear guiding the discussion, or lack of discussion.
So I won't tell you not to be cautious or worried, but maybe you can think about the situation in a new way where the fear doesn't pull you outside of yourself into distraction [00:07:00] and, you know, drive you to try and change the minds of people on the internet, which is a hobby that is generally believed to be a waste of time.
So here's something I've observed about this discussion. The people who are strongly against AI can't get past what I heard described on the New York Times podcast, The Daily, as the original sin of AI. So, that original sin is that these AI models, like ChatGPT, Bard, and so on, scraped all the written text available online, including copyrighted material, like that from which authors make our living, to create the models we know today.
There are a lot of people who cannot get past this original sin, and I think we can all agree that makes sense. So when you look at this straight on, you can see that there has been a violation of law, and seemingly no justice for it.
No matter where you fall in the AI [00:08:00] debate, you can see how that basic premise runs counter to the most essential social tenets of accountability and justice. If someone feels like they've been violated, and having your work fed into an AI model without your permission and without financial compensation feels like that to many authors, then you want some kind of justice and correction that feels equal to the violation.
Now, equal to the violation is important here. It's why we don't just, like, agree to give serial killers a slap on the wrist and fine them a thousand dollars before sending them on their way. Equal to the violation is important in justice. So I don't know about you, but I haven't yet seen any justice or accountability close to equal to the violation these authors feel they've received.
And that's a formula for the emotion of anger. Anger is an energy that arises in us when we feel like we've been violated or wronged. It's fuel for action [00:09:00] to right the wrong or push back in some way.
Violation without adequate justice is also a recipe for fear. Because imagine looking at all these multi billion dollar tech companies using your work without permission or payment, and you see them not receiving any meaningful consequences for the violation, and you know that probably there is no entity with enough power, authority, and motivation to accomplish anything close to regaining compensation or stopping the violations from happening in the future.
You, an artist, trying to make money off your copyrighted material, have essentially zero power, and see no one around you who can adequately protect you. So that feeling of powerlessness is a recipe for fear, among other emotions. If you're gung ho about AI, and you can't pause to relate to these emotions others are feeling, if you skip directly over compassion and empathy and go to, well, I [00:10:00] don't agree that they should feel this way, then you're not actually open to this discussion at all.
You're entrenched in your position and too scared to be open to other viewpoints. If that's the case, the best thing you can do right now is to turn off this podcast and spend a little time with that fear until it can lessen its grip on your mind, your heart, your body, and then come back to this podcast.
If we're not open to empathizing with the underlying emotions of the other side, then we're not open to a real discussion. There's no intellectual or emotional humility there.
Now, the people I see who are the most hardline anti AI authors simply cannot get past this original sin, this sense of violation. It is incredibly difficult to move past a violation when you feel like it's never been properly addressed. We can ask the remaining Indigenous people of America, or descendants of slaves, or any victim of crime whose perpetrators were either never caught, or caught but not [00:11:00] held accountable.
I think that's a relatable feeling for most people who have empathy, this feeling of violation without any sort of repair. And when I hear the pro AI authors discuss the new technology, I don't usually hear them addressing this original sin. It's almost like it's being erased or dismissed because it's inconvenient.
I mostly hear people saying things like, yeah, it's not a great start, but let's look at the possibilities of how this can help us. To be fair, these may simply be people who prefer not to linger on the past for various legitimate reasons. Some of this might be healthy, right? It might look a little like, well, accept the things you cannot change and relate to reality as it is.
I think there's an argument to be made for that, especially since, yeah, you, me, our little group of author friends are not, on our own, going to stop the continued development of generative AI models and their thieving of copyrighted materials. There is no bulldozer for us to lay [00:12:00] down in front of here.
We're up against too much money. And that fucking sucks, but it is kind of the truth, y'all, isn't it? We don't have to like that this is how the world works to accept it enough to at least not let it ruin every day of our lives. Can you still take a personal, individual stand against that?
So there's the lack of acknowledgement of the violation that's rubbing some people wrong, and there's the sense that people are lingering on the violation and trying to ignore the reality that's irking others. So before anyone judges the people who accept that the original sin cannot be undone and are ready to look ahead to how they wrangle AI to their will, I think it's important that we take a look at ourselves.
Before we judge the people who are using AI, who have moved on past this original sin, let's pause and ask ourselves, in what ways are we also participating in the [00:13:00] systems of theft and exploitation in which we live? Do you sell your books on Amazon, a notoriously exploitative company?
Do you buy things from big box stores? Do you vote for elected officials who accept lobbyist money? Do you use Facebook, Instagram, Threads, or X to build your author brand? You see what I'm getting at here, right? I'm not telling you to not participate in any of these systems that are built on exploitation.
It's almost impossible not to as an author in the modern world. What I'm saying is that we might consider the old "He that is without sin among you, let him first cast a stone at her" thing that we all kind of know. And yeah, it's weird to hear me quote the Bible, I know, especially a passage about stoning women, but this is a psychologically significant concept here. There are aspects of our thinking, feeling, and behavior that fall outside of what is called our idealized self-image in the Enneagram. If something falls outside of this idealized [00:14:00] self-image, we tend to turn a blind eye to it and not acknowledge it as ours. To make extra sure that we don't have to see the unsightly parts of ourselves, own them, and integrate them, we will project them out onto others.
Those people are the ones that think, feel, and act in the wrong way. Not me. We'll take it a step further and punish other people for having those thoughts, feelings, and actions that we ourselves have and do, but which we don't allow ourselves to see. This is a well documented pattern in humans and it's a real bummer, I gotta say, because it means that those things that other people do that really bug the shit out of us might just be things that we also do but are not allowing ourselves to see and own.
So here's a little of how this works. Let's say you're so, so tired of how long it takes your spouse to make a decision. Even stuff like where to go for dinner becomes a whole process of them going, [00:15:00] I don't know, maybe here, maybe there, maybe this other place, let me think about it. Ugh, right? I can almost assure you that indecision is a part of you that exists, but that you have disowned for one reason or another.
If you look at yourself honestly, maybe with some help from a friend, I bet you can find at least a few examples of you being indecisive or taking a while to make a decision. That's not always a negative thing. But if you've decided that being decisive is the right way to be and being indecisive is the wrong way to be, or maybe an adult trained you to feel this way at a young age, then it doesn't mean that you don't sometimes need to mull over a decision. It just means that you've become blind to this tendency in yourself. Shoved it in the basement, as Dr. Jerome Wagner of the Enneagram Spectrum describes it. And when you encounter it in other people, it's like that indecision is banging on the basement door, trying to get out.
So what do you do? Oftentimes we feel [00:16:00] revulsion to it when we see it in other people. You want them to shove it in the basement too, so you're not the only one suffering in that way.
Once we start to see how this pattern operates, how we become blind to parts of ourselves and then punish others for exhibiting those same parts, the next step is to take accountability for ourselves and stop focusing on trying to control others.
When I notice annoyance at someone else for, say, being sloppy, a much healthier use of my time than berating them for being sloppy is to look at my own life, acknowledge where I am sometimes sloppy, and reframe that word to something more tolerable, like, I'm being human, or cutting myself some slack because I'm overwhelmed, right?
And that's accepting that I also have the quality that I'm angry at the other person for having. And then I can extend the same generous relabeling to their behavior as I've given to mine. So instead of [00:17:00] leaving feeling like, oh, this person is so sloppy and I can never work with them again, I might instead approach them with, Hey, I have some notes on this.
Do you have time to change some of these things? We might also have a conversation to see if they're feeling overwhelmed and if I could support them in some way. So instead of severing the connection, I've just found a way to strengthen it. And all it took was for me to understand that my annoyance was probably a result of my own inner disconnection, and therefore I could heal something in myself to strengthen that connection.
Now, caveat: seeing a part of myself is not the same as allowing myself to act on that part. And we could use the example of murder for this. So when I hear people go, oh, that person, that murderer is a monster, and I could never do that, I actually feel like, really? You don't connect to that entitlement, or jealousy, or rage, or desire to control others at all?
That actually [00:18:00] worries me when people put those things completely at arm's length, because when we keep the ugly parts of ourselves locked in the basement, they can break free sometimes and we're not ready for them.
And I think of this every time I'm listening to something like Dateline, and they have an interview with someone who goes, Oh, he would never do that, he was such a nice guy. Well, you only saw the parts of him that weren't locked in the basement. So when I recognize my sense of entitlement, which I do have, and probably so do you in some way, rather than locking it in the basement, I'm much more likely to notice when it's trying to operate in my life and be able to keep it from guiding my actions, because I've accepted that it's part of me. But if I maintain my blindness to it and continue to deny its existence in any way inside of me, could I murder someone?
Yes, I do believe there are circumstances that could evolve where I felt entitled to take a life. Now, it would probably be in the [00:19:00] context that they're trying to kill me and I defend myself, but that is still entitlement to a life, isn't it? If you don't feel comfortable admitting to the same thing, then maybe there's a little work to do there so that you can recognize and steer those parts of yourself rather than letting them jump out and take the wheel later on.
What does this have to do with AI, Claire? It has everything to do with AI, because as logical as our arguments on either side of this debate sound to us, we're focusing on other individual authors as the problem rather than keeping our own houses clean and free of prisoners in the basement. Productive discussion, or even productive argument, doesn't happen when we're only able to see where the other person is wrong and unable to see where we might also be wrong in the same kind of way.
So basically, the problem I see happening online between the super pro AI folks and the super anti AI folks is [00:20:00] that they're not actually having the same conversation, but because they're not listening, they don't realize that. The anti AI folks, by and large, are saying: a massive violation has taken place, and I'm not okay with pretending it didn't happen.
I want to see restitution equal to the violation before I'm even willing to talk about the usefulness of the technology. And the response to that that I see from pro AI people tends to be snide comments about how the anti AI authors are Luddites. And when you realize they're not being Luddites, they just feel violated, it starts to seem kind of like a shitty thing to do to be smug about their inability to just get over it, right?
So maybe it's worth slowing down if you are using AI and you're very much pro AI and its possibilities. Maybe it's worth slowing down and saying, Hey, yeah, let's have a conversation about how to get adequate reparation for the violation. And maybe [00:21:00] as we go through that slow process, we can also look ahead and have a conversation that I'd like to have. And for the anti AI people:
What I hear coming from this group is, unfortunately, equating individual users of AI, individual authors who use AI for their business to try and keep their small business paying the bills, or because maybe they have a disability that the technology finally addresses, with the violators who committed the original sin.
But that doesn't really add up, does it? And we know this is a distraction tactic. We've seen this before. It's like when British Petroleum created the idea of a carbon footprint. You've probably heard of this concept, so I'll keep it brief. But basically, as BP was feeling the heat about their carbon emissions, they created the idea of a carbon footprint.
You could go onto their website and answer some questions about your lifestyle to figure out what your carbon footprint was. That is, how much pollution you were creating by being alive in a society [00:22:00] designed around fossil fuels. So this was a brilliant sleight of hand by BP, one of the top producers of greenhouse gases in the world, because it shifted the attention away from their culpability in contributing to climate change, and put the responsibility for fixing a global problem on each of us.
Most of whom are living paycheck to paycheck, and having to prioritize convenience and price over whatever the most eco-friendly option is. The result of this little maneuver they pulled is culture wars between people about SUVs versus Teslas, using a drinking straw or not, xeriscaping or having a lawn.
And certainly we can take accountability for our part in much larger issues. But what I see happening with AI is the same trick BP and other fossil fuel companies pulled in many ways. We, authors, are turning on each other instead of looking toward the real source of the problem and using [00:23:00] collective action to get the accountability and maybe even the restitution we deserve.
The same goes for the legitimate complaint of how much water is required to cool the hardware needed for even the simplest ChatGPT inquiry. You gotta admit that maybe asking the computer to summarize an email you're too tired to read might not be worth dumping out a full bottle of water, right? Or maybe you disagree, and that's fine.
And for those who are appalled by how some people seem okay using the models despite the information about the environmental impact, it may help to remember that we are all tired. So, so tired. And it's a big ask for our human brains to feel the weight of the environmental impact when we're just staring at a screen.
You don't have to know much about human psychology to know that the brain doesn't usually weigh out-of-sight disadvantages well against visible advantages. [00:24:00] And you can probably think of a time when the effort of, say, recycling something seemed like way too much, right? Maybe you had to wash it out, remove the label, maybe the recycling bin wasn't anywhere around, you're gonna have to take this thing home because these people don't recycle, right?
And so you just threw it in the trash. So again, let's grant others the same exceptions we occasionally grant ourselves. What I'm trying to invite everyone to hear is that authors who have a different position on AI than you need not be considered the enemy. And it's important to recognize that, because when our body perceives a bunch of enemies lurking around, our body lives at the razor's edge of a sympathetic nervous system response, or fight or flight.
And when we live like that, our creative thinking is smothered, and we get into burnout. So just as artists, we want to live like that as little as possible. [00:25:00] And one place to start is deconstructing our emotional patterns around this topic so that we can just have peace. We can just do what we need to do.
You can still take the stance you want to take for your business. I'm not telling you not to. But doing so doesn't require that you indicate to your body that there are people out there in this industry whose differing opinion means you harm. In this moment, as you listen to this, you are safe. If you're not making the money you want right now in your author career, I hate to be this gal, but it's not because of AI.
Not yet. Now that also seems important to own because it takes the power away from the big nebulous AI companies and puts some of it back in your hands. You can't scapegoat AI for all your ills yet. Maybe later, but not yet. I do see the scapegoating though. The scapegoating is starting to happen and it's worrisome. [00:26:00]
The problem with scapegoating is that it turns into a witch hunt soon enough. And witch hunts have never actually burned witches, right? So I've already seen this happening. I saw a claim just today that you can tell something is written by AI because it has a lot of em dashes.
Well, you might as well tie me to the stake and light my dress hem on fire, because all of my books must be AI-written then, even, miraculously, the ones from a decade ago.
There are a lot of authors who have mislabeled their action of persecuting others as defending the author community. These people have locked the persecutor in their basements so that they don't see that's what they're doing, because it doesn't fit with their idealized self-image. So I ask you, listening, to pause and consider the possibility that you might have slipped slightly into this pattern yourself.
Ask yourself if you have. If you've been trying [00:27:00] to root out authors who are using AI and expose them, then I can almost assure you that you have slipped into a persecutor pattern. And that's going to be hard to hear for some of you. And you're going to want to lob some harsh words my way. If you follow that urge, you're missing out on an opportunity to see something important about yourself.
I mean, listen, I see this urge in myself. I'm a one, of course I have persecutor urges. And I feel that sense of entitlement to be judge, jury, and executioner. And I'm glad I recognize it so that I can integrate it and own it, but not act on it. I can stay a step ahead of its tricks.
Mostly. Not always, I'm sure, but mostly. And a final note, before I get into some fun Enneagram type-specific stuff, is that most authors I talk to, most, are not on the extremes of this debate. But as usual, they are living under the [00:28:00] tyranny of the extremes, staying silent about their position because they don't want to be called a Luddite or a plagiarist.
And that's a feeling a lot of us can relate to lately, huh? Feeling like there is a small group at each extreme of a spectrum that threatens those who disagree with them with some sort of mock trial and public punishment. It's important that we ask how we're contributing to that as authors around the subject of AI.
It's also important to just remember that most authors I talk to, and I talk to a lot of them who trust me and are very open about their feelings on things like AI, most authors I talk to are somewhere in the middle. Right? So there are some AI technologies they feel comfortable using in certain ways, but not in others.
And there are certain AI technologies they don't feel comfortable using at all, and so they don't. And if anyone asks them about it, they won't answer, or they'll lie, which I think is smart, given the climate.
Let's break down this debate by Enneagram type to take an even [00:29:00] closer and less pleasant look at ourselves, shall we? We're already in Painesville for our ego with this episode, so we might as well keep going. All right, so let's start with the feeling triad and look at where the fear might be coming into play.
So Twos are the Helpers. Where might your desire to be loved and appreciated, or your fear of your help being rejected, play into your attitudes on AI? Threes are the Achievers. Where might your desire to be admired and seen as successful be blinding you to other important considerations in your attitudes about AI?
Fours are the Individualists. Where might your desire to be deeply seen and understood, and your fear of being meaningless, be playing into your attitudes on AI? Moving into the thinking triad: Fives are the Investigators. Where might your desire to be knowledgeable and competent, and your fear of looking foolish, be playing into your attitudes on AI?
[00:30:00] Sixes are the Loyalists. Where might your desire to be safe and supported by others, and your fear of being without guidance and support, be playing into your attitudes on AI? Sevens are the Enthusiasts. Where might your desire for satisfaction and limitlessness, and your fear of deprivation and boredom, be playing into your attitudes on AI?
And finally, the action triad: Eights are the Challengers. Where might your desire to feel powerful and invulnerable, and your fear of being harmed or controlled, be playing into your attitudes on AI? And Nines are the Peacemakers. Where might your desire to feel connected and whole, and your fear of conflict and controversy, be playing into your attitudes on AI?
And then Ones are the Reformers. Where might your desire to be good and righteous, and your fear of being bad or corrupted, be playing into your attitudes on AI? [00:31:00] If you feel like your fears and desires aren't influencing your attitudes on AI, at least not in a way that shuts you off from other viewpoints, then think again.
I hate to be that firm, but I also love it. Um, okay. So as I've talked about all through this episode, I haven't actually seen any other development in the industry, in the time that I've been a part of it, that stirs up more deep fears in people than AI has. So whether you're in touch with it or not, your core fear, and maybe a few others, are at play here.
If you haven't seen it, then you need to keep looking. So that's not to say that there is nothing to be genuinely concerned about when it comes to AI. Some of the stupidest people around, who think they're the smartest because they know how to code, have really opened Pandora's box with the recent technological developments.
And I'm not going to tell the people who are concerned about it [00:32:00] that it'll all be okay, just like I refuse to tell people who are concerned about the rise of authoritarianism that it will all be okay. I doubt it'll all be okay. But we will have a better time assessing the reality of the situation and addressing it appropriately if we're able to stand beside our fear rather than drowning in it.
And listening to the perspective of others is the best way to keep from being hopelessly entrenched in our own position to our detriment. Because the truth is there are some excellent things happening with particular AI technology, especially in like the medical field. So AI is a tool. But maybe we can think of it as a wrench.
It's a tool that can be very useful in the right hands and can also become a murder weapon in the wrong hands. Our ability to relate to it from a place of reality rather than through the fog of fear or a haze of unearned optimism will determine the future of [00:33:00] humanity, most likely. And a lot of those turning point decisions are, unfortunately, outside of my control.
And yours. And most if not all of the people we know. But we still have some power over how we relate to it on an individual level, and how much space we give the fear in our minds. So if you're one of the many authors wondering, what if AI replaces me? I'll say this. It will only replace you if you let it.
If your sole purpose of writing has been to sell as many copies as possible, if you've lost touch with the innate benefits to the human mind, heart, and body of writing stories, then the fear of AI replacing you will have oxygen to grow.
But just as there are authors who have no desire to use AI to write, there are readers who see reading a book as more than just consuming words. They see it as an opportunity to communicate with [00:34:00] another human being through storytelling. And they will not feel satisfied reading books that don't have another human being on the other side.
So maybe human-written books will become rare finds, and therefore more valuable. Maybe some authors will need to step up their craft to stand out with quality in the sea of AI-generated crap, and the swell of that sea is coming. But as Dr. Malcolm so succinctly puts it in Jurassic Park, life finds a way. AI is not alive.
You are. AI presents challenges to those who aren't interested in using it, but if you're waiting for your life to have no challenges or problems left in it, I've got bad news for you. There will always be a place for human-written books.
I have no doubt about that. And maybe as you become more comfortable with AI, if you get some of the restitution you need to move past the violation of the original sin, you'll find yourself curious [00:35:00] about how the technology can help you tell your human stories. Or maybe you never get there. That's okay too.
But either way, I hope this episode leaves you with some new thoughts about what isn't in your control and what you can let go of trying to control, as well as what is still in your control and where a wise place to put your attention might be. My hope is that each person listening can find a little bit of humility about their position on AI that allows them to say, this is the right approach for me and my author business right now.
But I can see how someone else might arrive somewhere different, so I'm going to assume they know what's best for their situation, and stop viewing them as the enemy who must change their view or face punishment. This is simply the golden rule in action. If you don't want other authors to nitpick, criticize, or even ostracize you for your view on AI, or anything else, the first step is to find a place inside of yourself where you don't feel [00:36:00] the need to do that to others. When you let that desire go, it's a huge relief, and suddenly you have a lot more attention and energy to best position your author career for the future of the industry. That's it for this week's episode of What If for Authors. I'm Claire Taylor, and I'm exhausted.
Thanks for joining me. I hope you'll tune in again for the next episode. Happy writing.