7 top misconceptions about ChatGPT and writing

Reading Time: 10 minutes

There are a lot of myths and misunderstandings about what ChatGPT can do and how it works.

People tend to swing between extremes — it’s so good that it will destroy the world, or at least, the writing profession as we know it. Or it’s so bad that it will destroy the world by flooding it with unreadable AI-generated spam. It’s illegal and immoral and will destroy our souls. It’s just a fancy chatbot, nothing new to see here, we can just ignore it.

After we published our statement that we accept AI-assisted fiction, I’ve been hearing some of these myths from readers and writers, and in online discussions.

(Image by Maria Korolov via Midjourney.)

Well, I’ve been using ChatGPT and similar programs extensively over the last few months, and have figured out how they work and how they don’t work. Also, I cover AI at my day job, so this is kind of my wheelhouse.

I’m not an AI developer, though I did code relational databases for a living once upon a time. But I have been a technology journalist and editor for more than two decades. I covered the dot-com boom, and have seen my fair share of other tech cycles come and go.

So let’s address these myths one by one.

1. ChatGPT can create a whole story or book for you

When people first try out ChatGPT and ask it to write them a story, it spits one out, and they might think that ChatGPT makes writing too easy. All you have to do is give it a prompt, and you get a story. No work required.

The horror. It’s a complete travesty of the writing process. It’s an attack on people’s very souls, on the very idea of creativity.

This is the top reason many editors — and many writers, too — have a knee-jerk reaction and reject all AI-assisted writing.

And yes, ChatGPT does create a story. But it doesn’t create the story you want, not unless you put in quite a bit of time and effort.

The AI image generators are the same way. Yes, you get a picture out when you give a prompt. That picture might be good enough for 90 percent of use cases. But it rarely gives you the exact picture that you want. If you want to have control over the output, you’ll have to spend quite a bit of time learning how to use the system.

Generative AI can give you some story. But not the story that you want. It can give you a picture. But not exactly the picture you want.

It’s not a magic wand.

What it can do is create a particular base level of quality for stories. Nobody should be submitting something that’s completely unreadable and ungrammatical anymore. Whether you’re a non-native English speaker, or dyslexic, or just struggle hard with putting words into sentences for whatever reason, AI assistants can generate a minimum passable draft.

Remember when music synthesizers came out? Suddenly, anyone could be a musician. Hit a button, and get a melody, or a perfect beat, without having to practice for years first. But just because someone could now create something by hitting a button didn’t mean that the thing they were creating was good. Rap, a genre built on those programmed beats and samples, is a highly competitive industry. It still takes skill to be a good rapper — just a different kind of skill than what you needed before.

2. ChatGPT is bad and AI can’t generate anything good

The next mistake people make, after they’ve played around with it a little bit, is assuming that because the AI isn’t giving them what they want, it’s impossible for the AI to give them what they want.

The AI can’t be creative, they say. Or it’s a lousy writer. Or it doesn’t have the feel of real human text.

Here, an old principle of computing applies: Garbage in, garbage out.

If you give it a bad prompt, you get out a bad result.

In fact, it takes quite a bit of effort to get ChatGPT to produce a high level of writing. You need to specify a lot of details about characters, motivations, themes, plot points, descriptive elements, tone, writing style, tense, formatting, and a lot more.

Your prompt may wind up as long as the final story, or even longer. And the prompt is just the beginning. Once you get the first draft of the story, you’ll need to go back and ask ChatGPT to revise, to adapt, to rewrite, to restructure.

The entire process can take as long as, or longer than, actually writing the story yourself. And it takes quite a bit of skill and experience.
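If you’re curious what that back-and-forth looks like under the hood, here’s a minimal sketch using OpenAI’s Python package (the 0.x-style API). The model name, the story specification, and the revision request are all placeholders I made up for illustration. The point is how much you have to spell out, and that the first draft is never the last one.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; you'd use your own key here

# A made-up example of how much specification a "good" story prompt needs.
story_spec = """
Write a 600-word opening scene for a science fiction short story.
Protagonist: Mara, a station mechanic in her 40s, dry sense of humor, afraid of open space.
Setting: a failing orbital habitat, two days before evacuation.
Theme: what people choose to save when they can't save everything.
Tone: quiet and melancholy, no action sequences.
Style: close third person, past tense, short declarative sentences.
Don't explain the setting in exposition; reveal it through Mara's repairs.
"""

messages = [{"role": "user", "content": story_spec}]
first_draft = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)["choices"][0]["message"]["content"]

# The first draft is rarely the one you want. The revision loop looks like this:
messages += [
    {"role": "assistant", "content": first_draft},
    {"role": "user", "content": "Good, but Mara sounds too cheerful. Rewrite the scene "
                                 "so her humor reads as deflection, and cut the last paragraph."},
]
revised = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)["choices"][0]["message"]["content"]
print(revised)
```

In practice you’d repeat that last step many times, and the web interface works the same way, just without the code.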

AI is dumb, but it’s not a total idiot. You just have to use it correctly.

3. ChatGPT is so hard to use that it’s useless

People who get this far might decide to just throw in the towel. What’s the point of using an AI to write when it’s faster to write it yourself?

They’re throwing the baby out with the bathwater.

First of all, learning how to interact with AI is a core skill of the future. You might as well start now and get a leg up on it.

Second, while ChatGPT might not be able to write a whole novel for you, it can do lots of small, useful things very well.

For example, it can help you organize your thoughts, structure your book outline, suggest character names, or provide random prompts to help you come up with new ideas and directions for your story to take.

Finally, ChatGPT by itself is not a novel-writing tool. It won’t keep track of your characters for you and it can’t write more than 700 words or so at a time. And it’s not an editing tool. You’ll have to cut-and-paste from ChatGPT into a word processor.

Professional novelists use tools specifically created for novel writing. We’ll be doing reviews of these tools as we go along. Sudowrite is one. Jasper AI is another. There’s also Novel AI, and Verb AI is a newcomer to the space. Expect to see a lot more tools popping up soon, and existing novel-writing platforms will probably be adding ChatGPT-powered functionality in the near future, if they haven’t already.

Oh, and ChatGPT is excellent at marketing communications. Ask it to generate press releases, social media posts, ideas for illustrations, author bios and other website content, and book blurbs. For all the stuff you know you should be doing but can’t get up the energy for, ChatGPT can step in and help out. Again, you’ll have to review its output for accuracy and tone, but having it create the first draft for you can really speed up your marketing efforts.

4. AI-generated content is easy to spot

Bad AI-generated content is easy to spot. Good AI-generated content is impossible to distinguish from human writing.

Let me repeat this — commercial authors who are already using AI to help write their novels can’t go back and distinguish the passages they wrote from those created by the AI. If they can’t do it, why in the world would you think you can?

Yes, there are AI detectors out there, but they are laughably bad.

OpenAI itself, the maker of ChatGPT, has released an AI detector — and it only detects AI-generated text 26 percent of the time, the company admitted. Worse yet, it classifies completely human-written text as AI-generated 9 percent of the time.

Most detectors work by looking at the distribution of particular types of words, or the length of sentences, or the complexity of ideas. AI-generated text tends to be a lot more even in tone than human-written text. Humans vary their sentence length and degree of complexity.

These detectors are easy to fool. Just adding “write sentences of different lengths” to your prompt is often enough. But even if the detectors got better, there’s no way to know whether a human used the AI to write a rough draft, then rewrote it in their own style. Or just changed a few words, enough to throw off the detection algorithm.
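Here’s a toy sketch, in Python, of the kind of signal those detectors lean on: sentence-length variation, sometimes called “burstiness.” It’s my own simplified illustration, not how any real detector is built, and the cutoff is an arbitrary number I picked.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def looks_machine_written(text: str, threshold: float = 4.0) -> bool:
    # Arbitrary cutoff, purely for illustration. Low variation in sentence
    # length gets flagged as "machine-like." This is also why such detectors
    # are easy to fool: ask the AI to vary its sentence lengths and the
    # score jumps right over the line.
    return burstiness(text) < threshold

flat = ("The ship was old. The crew was tired. The engines were failing. "
        "The captain was calm. The voyage went on anyway.")
varied = ("The ship was old. Its crew, exhausted after months of patching "
          "failing engines, trusted the captain anyway. The voyage went on.")

print(looks_machine_written(flat))    # True: very even sentence lengths
print(looks_machine_written(varied))  # False: lengths vary a lot more
```

A human editing pass, or a single instruction in the prompt, is enough to move text from one bucket to the other.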

Google itself has thrown in the towel. Earlier this month, the company said that it will not automatically classify AI-generated content as spam. Instead, it will look at the content of the article and how useful it is.

So yes, you can tell bad AI-generated content from good human writing. But you can’t tell good AI-generated content from human writing.

5. ChatGPT is bad because it’s just cutting-and-pasting from the web

Yes, ChatGPT was trained on web content. And on books. And on code. And a bunch of other training data sets.

During the training process, the AI drew inferences from the training data. It found correlations. It made deductions. And then it forgot all the training data.

It’s not a fancy search engine. It’s a collection of inferences and correlations, of all the knowledge it gained from reading billions of pieces of content. It doesn’t have access to the content itself.

If the content that it generates sounds similar to what’s out there already, it’s because it has seen so many examples of it that it can write it from scratch. If it tells you a particular fact, it’s because it learned the meaning behind the fact, not because it’s parroting words it’s heard before.

It’s the same reason why AI-generated images sometimes have watermarks on them. It’s not because the image generators are cutting-and-pasting from those watermarked images. No, it’s because they’ve learned that images on the web sometimes have something that looks like a watermark on them. They think the watermark is part of the image, so they try to create their own version of one.

ChatGPT writes content from scratch. If it happens to coincide with something already out there, it’s purely by chance.

It’s the same way that a human would write something from scratch that is inspired by all the books and articles they’ve read before.

Does that mean that it’s okay for AI companies to get their training data from proprietary data sets and copyrighted works? I don’t know. The courts will be deciding that — there are several ongoing lawsuits to this effect.

6. ChatGPT is just autocomplete on steroids

Technically, ChatGPT is autocomplete. It analyzes the conversation so far and predicts what the next word would be.
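To see what “predicting the next word” means at its crudest, here’s a toy sketch: a word-pair frequency model of my own invention, orders of magnitude simpler than anything inside ChatGPT, but it shows the basic idea of continuing text one word at a time.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then always pick the most
# common follower. That's "autocomplete" at its crudest. ChatGPT does something
# conceptually related, but with billions of learned parameters instead of a
# lookup table, which is where the interesting behavior comes from.
corpus = ("the ship sailed into the storm and the ship sailed into the night "
          "and the crew sailed home")

followers = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def autocomplete(word: str, steps: int = 5) -> str:
    out = [word]
    for _ in range(steps):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # "the ship sailed into the ship"
```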

But that isn’t a particularly useful or informative description. No more so than describing human brains as a collection of cells that generate chemical and electrical signals.

The sum is very much greater than its parts.

In fact, researchers have already identified 137 different emergent properties of large language models like ChatGPT. Apparently, once the AI models get big enough, they start to do things they shouldn’t be able to do.

For example, if you tell it to pretend to be a pirate, and write a poem about elephants using smartphones on the moon, it will do it. That request doesn’t exist out there on the web anywhere, so it can’t be drawing a correlation based on existing content. Or you could give it a piece of original text you wrote and ask it to analyze it. Again, this text has never been seen before. But it will still be able to give you a useful analysis.

If you ask it to think through something step by step before giving you an answer, it will give you different — and better — results than if you just asked the question directly.
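A minimal illustration of the difference, with prompts I made up for the example:

```python
# Two ways of asking ChatGPT the same question. The second, step-by-step
# phrasing tends to get more careful answers out of large language models.
# Both prompts are made-up examples, not from any real test.
direct_prompt = "Does my 90,000-word draft fit into a standard 300-page paperback?"

step_by_step_prompt = (
    "Does my 90,000-word draft fit into a standard 300-page paperback? "
    "Think it through step by step: first estimate how many words fit on a page, "
    "then work out the page count, and only then give me your answer."
)
```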

Something weird is happening there.

Here’s what these emergent phenomena look like when plotted out:

(Image courtesy Jason Wei.)

As you can see in the charts, the improvements in performance are barely noticeable at first. In fact, some experts I talked to a couple of years ago said that our current AI technology would never get us to the point where AIs had common sense. The capabilities just aren’t there, they said. It would be like taking the technology we used to build skyscrapers and expecting it to get us to the moon.

Well, we’re heading for the moon now.

We’ve hit a point at which the AI models have stopped crawling and are now learning to fly.

It’s not strictly speaking accurate to describe what ChatGPT does as “thinking.” But, for all intents and purposes, that’s what it feels like.

7. Generative AI is illegal and immoral and we shouldn’t be using it

Generative AI is not illegal — not yet, anyway. The courts have yet to weigh in on what use of training data is appropriate and what isn’t.

In the end, we’ll probably see some guidelines established. AI companies will probably wind up having to ask permission from content owners, or pay for the rights to use the work for training. And they might pay artists and writers if their names are mentioned in the prompts.

In fact, Shutterstock is already doing this with their generative AI tool.

Expect to see more companies do something similar in order to stay ahead of the courts on this issue. Plus, it helps corporate customers feel more comfortable using AI-powered services.

Worst-case scenario, generative AI will get a little bit more expensive as companies pay more for training data. Some AI companies might limit their training data sets to public domain content, which will slightly lower the quality of the results — but the continuing improvements in the AI models will probably offset that.

(Image by Maria Korolov via Midjourney.)

What about the immoral part?

People are saying that using generative AI takes jobs away from working artists and writers.

This is true, to some extent.

For example, by replacing the free Pixabay illustrations we used to use to illustrate these blog posts with ones generated by AI, there’s a digital artist somewhere whose work would have gotten some exposure and now isn’t getting it. On the other hand, Shutterstock will pay artists when it generates art in their signature styles. And millions of people who previously didn’t have any access at all to custom art will now have it.

And, by following this same logic, you should stop using your phone to take pictures of people because it takes work away from portrait painters. And stop using your email because it takes work away from bike messengers. And stop making microwave meals because it takes work away from restaurants.

Technology will always replace human jobs. It’s been replacing human jobs through all of history — including those of writers and artists.

And it might — theoretically — result in widespread unemployment. But it’s more likely to result in shifts in employment, instead. You’d think that increased productivity would be correlated with job losses. But unemployment is at near-record lows. We keep finding new things for people to do.

Is AI so completely different from all previous technological innovations that it will change this dynamic? Some experts think so. Many are arguing that we should have universal basic income, to help reduce the impact of the mass unemployment that will result.

I personally think that universal basic income is a good idea no matter what happens with AI. We already have it for retirees and trust fund babies — why not extend it to everyone else who wants it? Sure, some people abuse it. They waste their time partying, doing drugs, golfing, and playing mahjong. But others volunteer, run charities, take classes, become painters or writers, or start companies.

But regardless of what happens with universal basic income, I don’t think that the rise of AI will lead to massive unemployment. After all, back when we were all subsistence farmers, who could have thought that there would be so many schools and hospitals and government offices for people to work at? Who could have foreseen the emergence of the sports industry, of Hollywood, of the computer industry, of the space industry? All of those have been made possible due to the advent of industrialization.

There are plenty of downsides to technological progress, including painful — but temporary — disruptions in the labor force.

But I, personally, don’t think that writing jobs will disappear. I think they will evolve, and the new tools we’ll now have available will make new types of creativity possible that we couldn’t even have imagined before.

MetaStellar editor and publisher Maria Korolov is a science fiction novelist, writing stories set in a future virtual world. And, during the day, she is an award-winning freelance technology journalist who covers artificial intelligence, cybersecurity and enterprise virtual reality. See her Amazon author page here and follow her on Twitter, Facebook, or LinkedIn, and check out her latest videos on the Maria Korolov YouTube channel. Email her at [email protected]. She is also the editor and publisher of Hypergrid Business, one of the top global sites covering virtual reality.
