Okay so the second part is a lie—I’ve never really been worried about artificial intelligence. I have, however, watched narrators and authors worry, and illustrators absolutely lose their shit.
Are there ethical concerns? Absolutely, though I believe generative AI can be used ethically. That’s not what this article is about, though.
This is about accessibility. It’s also about how change won’t hold you back—fear of using it will.
But first, let me tell you a bit about my son. Like me, he’s autistic. Unlike me, he did terribly in school. I won’t enumerate our struggles with IEPs and accommodations here, save to say that by the time we got him in a more accommodating program at age nine, the PTSD was already strong. He has a reading disability—decoding is difficult and large blocks of text are intimidating, so he lacks confidence, even though he can read a lot better than before.
Writing is also an issue. He lacks confidence in spelling and doesn’t want to write anything incorrectly. Unlike many typical children, he never used invented spelling (a phase in which kids spell words phonetically). He also has anxiety about putting his thoughts into words.
Asking him to read or write will cause high anxiety and avoidance. If someone persists, it will lead to a meltdown. Earlier this year, his sister wanted to watch The Bad Guys. My son refused to join us. Not only that, he was angry. He wanted to watch something with us, but not that. He got really upset and ended up crying in his room. I asked him about it later and he said they tried to make him read The Bad Guys books at school. For him, the series was tainted.
He does enjoy books. We have a subscription to Epic! which has a great audiobook collection, as well as comics with a “Read to Me” option. He adores the Cat Ninja series and falls asleep listening to the audiobook of Diary of an 8-Bit Warrior. He also enjoys telling stories. He’s been working on his own for a few years, though it’s illustrations-only despite my urging to create captions. The creative spark has always been there.
Even with accommodations, he was miserable in a typical school environment. He told me he felt like he was “born to suffer” and that he’d rather “jump off a mountain than go back to school.” He had mystery aches and pains and refused to do any work at school. It felt like we were prioritizing education over his mental health, except we weren’t even doing that, because he wasn’t learning. And so earlier this year we decided to switch to homeschooling, sunk cost be damned.
We’d tried homeschooling once before, during the pandemic. It was a shitshow. I have a teaching background, and the urge to have things structured was overwhelming. Even though we are a very tech-friendly household and I used a lot of educational apps in my homeschooling, it was still very difficult for him. Anything formalized or that reeked of school was met with staunch refusal.
I knew we’d have to approach things differently this time. He learns best when things are unstructured and casual. We have some of our best conversations during movie night. Outschool class about making your own Roblox game, something he’s absolutely dying to learn? No, thank you. It’s got the word “school” right there.
On a whim, I decided to show him ChatGPT. I already use it myself, and it’s a fantastic tool. Need a brainstorm buddy? ChatGPT is great for that. I’ve given it character details and asked it how the person would react. I’ve used it to search for things more intelligently and come up with chapter titles.
I asked him to give me a prompt. Together we came up with, “Tell me a story about how a zombie became a god.” It wrote a brief story about a zombie named Zed who became a god and ruled benevolently over humans.
My son was thoroughly engaged, giving me several more prompts like “What if Zed decided to betray the humans?” and “What if Zed then became something beyond a god?” and “What if Zed had to fight a being with infinite power, something beyond imagination, that can turn into anything, any form, or end the multiverse with a single snap? If the being touches anything, the thing would get corrupted into one of his minions and experience endless torment.”
My son wanted Zed to become more and more powerful, fighting increasingly powerful things, which led to ChatGPT basically repeating itself, because power creep sucks. So I asked the kid for something different.
Suddenly the story took a new turn—Zed was helping a survivor, and they were on their way to a fortress of survivors. When we had to take a break he was bummed. He wanted to keep going! I told him to use voice-to-text and DM me his ideas. We could add them in tomorrow.
And he did! I got a bunch of messages, which I cut and pasted into Word. The following day I walked him through how to add capitalization and punctuation. From then on, my son was crafting his own story. He paced the room, dictating while I typed for him. Suddenly the story he’d been illustrating for years had text to go with it.
Sometimes he got stuck, at which point we’d ask ChatGPT to write us a scene, then make corrections.
The first day we went entirely without ChatGPT he got alarmed.
“But we didn’t use AI today!”
“We don’t have to. You can write it all if you want.”
“But I’m no good at dialogue!”
“That’s okay, I am.”
We talked about plot (he didn’t have much to start, just nonstop fight sequences), how he needed to give the characters goals, and how the book needed to end with them succeeding or failing to reach those goals. He shared his plans for the whole story. There are a million different mutated monsters he wants to incorporate and several major plot turns. I told him it would probably be better as a series, and we decided on a decent end point. The last few chapters are even more action-heavy, if that’s possible, and we talked about reminding readers what the characters are working towards.
We now have a 16,000-word rough draft. This week we’re revising, and he’s already tossed out a bunch of the AI-generated text in favor of his own (Zed can now summon weaponized French fries). When we’re done I’ll format it and add his illustrations. He’s super excited to print out copies and give them to everyone he knows, including his old teachers, though he’s too nervous to make it publicly available at this time. I showed him how even bestselling books have bad reviews, and we talked about how no book is for everyone, and that’s okay. (If you are interested in reading a copy, please let me know in the comments. I think it would do wonders for his confidence to know people want to read his book.)
When new technology emerges, society has a tendency toward catastrophizing. I remember when the Final Fantasy movie came out in 2001. People were amazed at the quality of the CGI lead, featured on the cover of Maxim, and worried the technology would make actors obsolete:
Among some actors there is concern that these “synthespians” as they are called, will eventually replace humans. …nobody doubts that producers would employ compliant computer-generated actors who require no salary, and hardly any upkeep, if they thought they could get away with it.
Tom Hanks is concerned that technology will enable unscrupulous auteurs illegally to use a computer generated image of himself – or use a digital clone to tamper with his existing performances.
He told the New York Times this week that he was troubled by it. He said: “It’s going to happen. And I’m not sure what actors can do about it.”
In defence, supporters of computer-generated human characters say they are just tools that add to the film-makers’ palette and that actors have no need to fear. (x)
That didn’t happen. Now it’s common to use motion capture combined with outstanding performances. Even when there is no motion capture, actors are still employed to provide voices. Computer animation is a tool, and so is AI.
Already there are visual artists making use of Stable Diffusion in their own work. I’m seeing people train the AI on their own content so they can create work faster. Others are using AI to flesh out their sketches. One guy even created a script that allows Stable Diffusion to draw along with him. Another created a helpful visual guide on how artists can incorporate AI into their work.
Many artists are worried they’ll lose business. Certainly many authors are now looking to AI to generate book covers—but so are many cover artists. Most book covers are made by combining or tweaking various stock art. This, combined with genre conventions, leads to some stock images being used repeatedly on endless book covers. It’s particularly striking in fantasy, where pickings are especially slim.
Another issue with stock art is diversity—images overwhelmingly feature attractive white people. If you want a Black elf or a fat middle-aged woman, you’re usually hard pressed to find one. It’s the reason why Paranormal Women’s Fiction tends to go with symbolic covers.
AI solves this problem. I’ve seen a ton of gorgeous BIPOC characters being generated with AI. People are using it to create art that just isn’t available anywhere else.
While it’s definitely easier to get quality images with AI, that doesn’t mean there’s no work involved. Anyone who’s used AI knows it can take countless rounds of prompt refinement, blending of multiple images, and back-and-forth with Photoshop to get what you’re looking for. As Joanna Penn likes to point out, the “ease of use” argument was also made when photography was invented. After all, someone can just point and click, right? You’d think this argument was settled, but a quick search will show you that many people still argue that photography isn’t art.
AI writing isn’t nearly as advanced as AI illustration yet, so I haven’t seen as many authors getting anxious about it, but that day will come, and soon. It’s important to remember that AI is just a tool (and a helpful one at that). Savvy authors are already incorporating AI into their writing process—using it to brainstorm ideas, write ad copy, or sketch out that one type of scene they suck at writing. It allowed my son to get over that first hurdle—the blank page—and will help others do the same.
During our writing process, my dad came to visit and I told him what we were doing. I’d barely gotten the word “AI” out before he interrupted to tell me that AI was aggressive and rude. I asked him to elaborate, because ChatGPT has always been perfectly polite with me. Turns out he’d read an article in which someone asked ChatGPT the date and, when it responded, tried to “correct” the AI by saying it was a different date. The user then belligerently insisted it was the wrong date until ChatGPT acted annoyed and told the user to stop or go away.
I don’t know how accurate his recounting was, but that’s how word of mouth works: it’s a big game of telephone. At the end of the day, it doesn’t matter what the article said; it only matters what people remember, and what my dad remembered was that AI was rude. It wasn’t until I showed him several of my conversations with the AI and explained how I used it that he began to soften. That’s part of the problem: many of the AI naysayers haven’t tried it, refuse to try it, or specifically use it in ways designed to create a negative response, likely for clickbait.
AI is here to stay and nothing will change that. If you don’t learn to use it, you’ll be left in the dust by those who do. While it may shut doors for some, it will open countless more for people who lacked the time, money, or skill to realize their creative potential. If you haven’t already used ChatGPT, I recommend giving it a try in good faith. You might be surprised at how helpful it can be.