77,752: The End

I finished the first draft of Gulf City Blues moments ago. I need to take some time to reflect on the process and so on, and I will certainly do that. Right now, I’m basking in the satisfaction of completion.

I’m especially happy that I wrote a solid ending. That’s always a challenge. I tend to describe my endings as “crash landings.” Early drafts are the worst, but I have declared projects “done” even when I wasn’t happy with an ending that felt abrupt and incomplete. To finish a first draft and think, “Yeah, that has legs,” feels so good.

I always take time off between drafts. At least two weeks, but preferably a month. I like to empty my mind of the whole story before I come back to it. Attention residue is proportional to the size of the project, and two weeks usually isn’t enough to stop thinking and rethinking the plot.

I had planned to use the break to revisit world building for the novel. But I’ve changed my mind. I might waste time on things that aren’t necessary for this story, and keeping my head in Gulf City would make it hard to forget the story I told. It’s time to let everything about Gulf City lie fallow in my mind while I attend to other things.

How I’m using ChatGPT, and how you can use it, too

Yesterday, I wrote about using ChatGPT as a sounding board to develop writing topics. I used this prompt:

Act as a conversation partner as I think through a topic. Your goal is to help me explore the topic and clarify my thinking for something I might write about. Ask me one question at a time. After I respond, comment on what you think I mean and ask me if I want another question. Ask me for a topic.

Today, I’ll break down why I structured the prompt the way I did. If you’ve been wondering how to get started with ChatGPT, or tried it and gotten lackluster results, I hope seeing how I use it will help you find a good use for it.

General Principles

Before I dig into the individual components of the prompt, it’s important to understand a few general principles of working with ChatGPT.

First, be specific. The prompt above is focused on a single objective: to explore a topic that I provide. If I had been struggling to identify a topic, I’d have created a different prompt.

Second, give the tool a clear task. In my prompt, I tell the tool what its objective should be. One challenge people have when using ChatGPT is that they aren’t clear about the output they’re looking for and get results they can’t use.

Finally, remember that the tool is only as good as the data it has been trained on, and a lot of that data isn’t very good! ChatGPT doesn’t “know” things; it can only generate text that probably addresses the question it is being asked. It isn’t a search engine, and if you aren’t careful with what you ask it, it may generate bullshit answers. I once asked it what it knew about me as a writer of roleplaying game material. It constructed an elaborate bibliography of award-winning Dungeons & Dragons publications. Not only did I not write them, but none of them existed. So you have to be responsible for checking its output and making sure it’s good.

Prompt Patterns

User input to a Large Language Model (LLM) such as ChatGPT is called a prompt. Learning to create effective prompts is called prompt engineering. That subject is beyond the scope of this post, not to mention beyond my current ability to explain. What’s important is that part of prompt engineering is learning to use conversational shortcuts known as “prompt patterns.”

Prompt engineers have identified many prompt patterns and continue to identify new ones. I know of almost two dozen. But don’t worry! You only need a few to get started, and I’ll explain some of them in this post.

Prompt patterns can be used individually. They can also be combined to create more robust interactions. In the prompt I identified above, there are four: Persona, Flipped Interaction, Cognitive Verifier, and Tail Generation.

Persona

“Act as a conversation partner…”

In this phrase, I’m instructing ChatGPT to respond as if it is a particular kind of person. The Persona pattern restricts the output to what someone with the named background or skillset would know and how they would respond. Here, I want ChatGPT not to answer questions for me but to engage in back-and-forth with me as if it were a human and we were chatting over a cup of coffee.

Here are some other examples of using the Persona Pattern:

  • Responding as a dental technician, tell me what questions I should ask at my next cleaning about how I can improve my gum health.
  • I am going to have my kitchen remodeled. Acting as an architect, tell me factors to consider that I might not know about.
  • As a nutritionist, tell me what considerations I’ll need to make if I shift to a vegan diet.

In each prompt, I’m providing perspective to the tool to guide its output.

Flipped Interaction

“Your goal is to help me explore the topic and clarify my thinking for something I might write about. Ask me one question at a time.”

This phrase shows a kind of prompt pattern called “flipped interaction.” The idea is to have ChatGPT ask you questions, rather than the other way around. It’s a great technique when you aren’t sure exactly what you want to ask and need to dial in your topic.

In this case, I’ve implied the flipped interaction rather than explicitly asked for it. ChatGPT probably understood my request because the prompt was part of a larger conversation where I’d already used the pattern. You may find that you have to be more explicit in your instructions. For example, when I needed to flesh out the fictional city for my current novel, Gulf City Blues, I used this prompt to start:

I am writing about a fictional city on the southern Gulf Coast of Florida. I don’t know what world-building factors to consider. Ask me questions about the kind of setting I want until you have enough information to make recommendations. Ask me one question at a time. Ask me the first question.

Those last two instructions aren’t always necessary, but I rarely omit them. Without the first of them, the tool usually spits out a long list of questions. That can be helpful if I want to see its train of thought, but usually it’s a distraction. Without the final statement, the tool sometimes responds with enthusiasm that it would love to help and then waits for me to nudge it again to start.

Try flipped interaction when you want ChatGPT to question you about a topic rather than questioning it.

Cognitive Verifier

“After I respond, comment on what you think I mean…”

This statement is a form of the Cognitive Verifier pattern. The intention is to help the LLM understand what you’re really looking for instead of answering the surface question with the easiest possible match. It’s sort of like talking to a trusted advisor who is willing to dig into your problem rather than give you a quick answer. In my case, if ChatGPT’s interpretation had been off topic, its comment gave me the chance to rephrase my response.

Another way to phrase a prompt using Cognitive Verifier is:

Whenever I ask you a question, generate a number of additional questions that would help you generate a more accurate response.

Tail Generation

“… and ask me if I want another question.”

Tail generation is a prompt pattern to remind ChatGPT of what you’re trying to do. That’s especially useful for long conversations, because ChatGPT starts to forget what you’re talking about after a while. By telling ChatGPT to ask me if I want another question, I’m making sure it will keep going until I’m satisfied with the output.
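If you ever drive ChatGPT from a script rather than the chat window, the same combination of patterns can be assembled programmatically before it’s sent to the model. Here is a minimal Python sketch; the PATTERNS dictionary and build_prompt helper are illustrative names I made up for this post, not part of any library or official API:

```python
# A sketch of combining prompt patterns into one prompt string.
# PATTERNS and build_prompt are illustrative names invented for this
# post; they are not part of any library or official API.

PATTERNS = {
    "persona": "Act as a conversation partner as I think through a topic.",
    "flipped_interaction": (
        "Your goal is to help me explore the topic and clarify my thinking "
        "for something I might write about. Ask me one question at a time."
    ),
    "cognitive_verifier": "After I respond, comment on what you think I mean",
    "tail_generation": "and ask me if I want another question. Ask me for a topic.",
}

def build_prompt(*names: str) -> str:
    """Join the named pattern fragments, in order, into one prompt."""
    return " ".join(PATTERNS[n] for n in names)

prompt = build_prompt(
    "persona", "flipped_interaction", "cognitive_verifier", "tail_generation"
)
print(prompt)
```

Keeping each pattern as a separate fragment makes it easy to experiment, which is the whole point: drop the Cognitive Verifier, swap in a different Persona, and see how the conversation changes.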

Other Prompt Patterns

As I said above, there are dozens of prompt patterns, and data scientists continue to discover new ones. Some don’t have much use to me, but there are others that I use regularly. “Outline expander” helps you build and flesh out an outline. I’ve used it when I’m crafting new workshops. “Question refinement” instructs the LLM to suggest a better version of the question you’re asking, which often results in more useful output.

I encourage you to try the ones I’ve outlined here. Try them individually and in combinations. See what works for you and what doesn’t. Then explore other patterns. Generative AI isn’t a fad, although the hype surrounding it often makes it seem that way. It’s not going away. It’s a powerful tool waiting for you to learn to use it.

Sisyphus with a pen

Yesterday, I realized that I hadn’t written a blog post since Monday. I should write something, I thought, and within minutes of putting pen to paper, I ran into a familiar challenge. Although I had a topic in mind, it was too broad. The more I wrote, the more subtopics popped up. It was like playing whack-a-mole, but the moles were hydra heads: knock one down and two more take its place. Worse, I’d realize that I needed to go back and expand on an earlier idea even more.

This happens when I write just about anything. I try to dissect a broad idea into manageable pieces, but each piece reveals more ideas, often interconnected ones. The sheer scale of the task overwhelms me, and I often give up.

I’ve experimented with various techniques for generating focus. Handwriting on paper forces me to slow down, but slowing down only helps a little. Bullet-point outlines seem like the answer until I start writing from one and realize that I’ve missed something. Mind mapping ought to help but only produces its own chaotic web of thought. The only benefit seems to be that I get frustrated and give up faster, which saves time. “It’s a great way to visualize your topic,” I’ve been told. For me, it’s a great way to visualize my inability to focus. It’s discouraging, to say the least.

The best thing is to talk through the topic with someone, then write the ideas down as quickly as I can afterward. That’s great if someone happens to have the time and inclination to indulge me. That’s not always the case, and it’s not reasonable for me to expect people to be a sounding board at all times. I’d need an entourage, but how could I afford to feed them all?

I do have an OpenAI account and a subscription to ChatGPT, though.

I disdain the use of Large Language Model tools to replace human writing. Using it as an aid to writing is different, and ChatGPT excels at being a sounding board. When this very post threatened to get out of hand, I decided to give the chatbot a try. Here’s the prompt I started with:

Act as a conversation partner as I think through a topic. Your goal is to help me explore the topic and clarify my thinking for something I might write about. Ask me one question at a time. After I respond, comment on what you think I mean and ask me if I want another question. Ask me for a topic.

The result was a series of exchanges in which the tool helped me sort through all the things I might want to cover. After eight questions, I had a clearer grasp of what I wanted to say and started writing this post. I don’t know why I have so much trouble focusing. One possibility is… (BAD SAM, LEAVE IT!) … a topic I’ll have to explore another time. Meanwhile, I’m glad I’ve discovered a new tool to help me think and write with clarity.

Dropping out of warp

Last weekend, I joined the members of my critique group for a writing retreat. Over four days, I logged thirty hours of writing and added 9,000 words to my manuscript. It’s amazing what you can do when you’re in a house with three other writers and no one wants to break anyone else’s focus. Coming home to a normal writing schedule of about two hours a day feels like dropping out of warp speed.

I’ve begun every session of this project by writing, “My primary objective is to write an enjoyable PI story.” Every prior attempt to write a novel has been haunted by the ghosts of my graduate studies in literature. No matter what I wrote, I felt I ought to be writing something with deep significance. I never could live up to the ideal, and I berated myself for it. Filled with despair and self-loathing, I’d shift the story toward a didactic theme. Characters turned into mannequins and plots turned into lectures. I’d hate every minute of it, veer back toward more adventure-style fare, and begin the cycle anew.

With this story, I wanted to remind myself every day to focus on writing a story people would enjoy. “Social significance” would have to emerge—if it emerged—on its own. The mantra did help in that regard. Police corruption is an integral part of the plot, but the story is not a lecture about how All Right-Thinking People Must Stand For Justice. I’ve built characters with interesting motives, flaws, and strengths and let them interact.

But this weekend, I realized a curious thing had happened over the course of writing the first draft. My objective shifted from “write an enjoyable PI story” to “write 80,000 words.” The shift translated at first to setting a punishing daily goal. Even after I moved the target date to the end of January, I still focused on the target total. I wrote a lot of material not to serve the story but to pile up word count. If my focus really had been on writing an enjoyable story, I’d have stopped, re-examined my plans and the state of the story, and adjusted what I was writing.

It’s funny because I chose one word in my objective deliberately so I wouldn’t worry about length: “story.” I didn’t say I wanted to write “an enjoyable PI novel.” I chose the word “story” because I wanted to leave it open to finding the right length. I wasn’t sure whether it should be a short story, a novella, a novel, or a series. But once I recognized it was going to be a novel, I focused on the target length and sacrificed good storytelling.

Today, there’s not much story left. Mark knows who the killer is and needs only one crucial piece of evidence to prove it. There’s one subplot to wrap up—will he reconcile with his ex-girlfriend? (I don’t know yet.) Two to four scenes will take care of plot and subplot. Regardless of the final length of the draft, I will type “The End” once I write them. If the book is too short, well, there’s another draft after this one, and I can worry about it then.

2024: The Year I Don’t Write a Novel

Every year, I tell myself that this will be the year I write a great novel. It never quite works out that way. The truth is that I don’t so much “finish” novels as “get bored with and abandon” them. I am always looking forward to the next one. The one that’s going to be so great, not like the puddle of puke I’m working on now. The next novel will flow from my pen like liquid chocolate, rich and delightful. It will not take much revision because I’ll do it right this time.

Of course, that never happens because first drafts are never like that.

And then my mind is looking forward to the next thing, instead of working to refine and improve what’s right in front of me. It’s as if, having arrived at an oasis in the desert, I go chasing the mirage of a bigger, better oasis that might not exist.

This year, I am not resolving to write a new novel. I’d like to finish Gulf City Blues without rushing it. I’d like to set it aside for a month and not start the next thing, so that I can return to Mark Marshal and his problems without having another story gnawing at me. I don’t know what I’ll do with the down time. I might take that time for related research, or to develop elements of the setting that I know need attention. But I won’t start a different project.

When I come back to the story for revision, I won’t rush that, either. A few pages a day is plenty. If it takes months, then it takes months. Maybe it will take the rest of the year. Maybe it will take longer. I don’t care. I can’t rush through stories anymore only to abandon them before they get good.

Does ChatGPT dream of electric lawyers?

Ken White, AKA “Popehat,” posted a screenshot on Bluesky last night of a ChatGPT conversation.

His prompt: What does Ken White think about RICO?

ChatGPT’s response:

“As of my last knowledge update in January 2022, Ken White… has not expressed a singular or uniform opinion…” followed by some general summary of varying opinions about RICO, and finishing with “To get the most accurate and up-to-date information on Ken White’s views on RICO…I recommend checking his recent writings…”

White’s comment was, “Still some bugs to work out.”


White is a lawyer with a very public and long record of commentary on RICO. It’s not surprising that he’d view the response as buggy. I had a similar reaction when I first experimented with ChatGPT. I asked it what it knew about me as a writer. It provided a list of my published works. None were real. I thought, how useless is this thing if it just makes things up?

That understandable reaction highlights a few common, interconnected misconceptions about ChatGPT:

  • That it is a search engine,
  • With access to all information (or at least everything on the Internet),
  • And that, therefore, its so-called hallucinations (AKA “making stuff up”) prove it doesn’t work.

None of those assumptions holds up.

ChatGPT is not a search engine. It is not a tool for retrieving information, although sometimes it will work for that purpose. You may have seen an explanation that it generates its text via probabilities of words occurring near each other. There is much more to it, but that explanation is good enough for our purposes right now. When you prompt ChatGPT, it doesn’t retrieve information. It determines the most likely first word or phrase in a response, then the next word or phrase, and so on.
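To make that word-by-word idea concrete, here is a toy Python sketch. Real LLMs compute these probabilities with a neural network over an enormous vocabulary; the hand-written NEXT_WORD table below is a stand-in for explanation only, nothing like the real architecture:

```python
# A toy illustration of next-word generation. Real LLMs learn these
# probabilities from training data via a neural network; this
# hand-written table is a stand-in for explanation only.

NEXT_WORD = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start: str) -> str:
    """Greedily pick the most probable next word until <end>."""
    words = [start]
    while words[-1] in NEXT_WORD:
        choices = NEXT_WORD[words[-1]]
        best = max(choices, key=choices.get)
        if best == "<end>":
            break
        words.append(best)
    return " ".join(words)

print(generate("the"))  # follows the most probable path: "the cat sat"
```

Notice that nothing in the table is “retrieved.” The output is whatever sequence the probabilities favor, which is exactly why a fluent-sounding answer can still be false.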

The quality of that output depends on the quality of its training data. The training data shapes its responses. Getting 100% reliable information is never going to happen because its training data isn’t 100% true.

OpenAI has been coy about the exact content of the training data, so there’s no way to know for sure what the tool has digested. But we do know that fiction is included. That’s thanks to some Google researchers who tricked it into divulging some of its training data. They found blocks of published novels in the output. So fiction can creep into responses.

Beyond that, some of its data consists of false information. When I ask it questions about Scrum, its responses draw on the Scrum Guide. They also draw on articles written by people who didn’t know what they were talking about. The Guide says that a Sprint is no more than one month. A lot of people have written that it is “two to four weeks.” ChatGPT doesn’t know that the latter statement is false, only that it has seen it made frequently.

Garbage In, Garbage Out

Furthermore, ChatGPT hasn’t been trained on all data, everywhere. It hasn’t even been trained on everything that’s available on the Internet. To return to Ken White’s example—was ChatGPT trained on all of his writing? On some? On none? We can only guess. Given the response, it’s probably a very small amount of “some.”

ChatGPT responses remind me of an extemporaneous speaking competition I participated in during high school. You’d be given a random object (one was an internal part of a vacuum cleaner) and a few minutes to gather your thoughts. Then you had to give a five-minute speech about some doodad you probably knew nothing about. The contest was judged on style, presence, and audience engagement. If the criteria had been, “Did the presenter give me accurate/useful information about the object,” no one could have won.

You can elicit valuable output from ChatGPT. It requires going beyond information retrieval and learning something about the art of prompt engineering. But that’s a topic for another time.

ChatGPT has limitations and drawbacks, and there are serious ethical considerations about what its training data contains. I’m not qualified to address that topic. But based on what I’ve learned so far, I know that it’s pointless to criticize ChatGPT because it fails at a task it isn’t designed to do.

60,000+ words

I ended yesterday’s writing session thirteen words shy of 60,000 total. Today, I blasted out over a thousand. I am 76.25% of the way toward my target of 80,000.

Three-quarters of the way through a story is roughly where heroes are at their lowest. They have failed, utterly. They are farther from solving the riddle, answering the question, or discovering the mystery than they were at the start. They’ve lost everything–which is why this beat is called “All is Lost” in the Save the Cat storytelling framework.

I honored that “rule” of fiction by destroying my protagonist’s world. His apartment and his car are in flames. He escaped the blaze wearing only a pair of shorts–no shirt or shoes–and carrying the gun he managed to grab on his way out. Having pushed away all of the people in his life, he has no one to turn to. Oh, and he’s wanted by police, so having a gun isn’t going to do him any favors in a few minutes.

The next 20,000 words are going to be a hoot.

Unprepared to relax

Christmas is almost upon us, and I feel unprepared. I haven’t done much to prepare… because there weren’t many preparations to make. Sweetie specifically asked me not to spend money on her except for the one small item I traditionally get her. I got a few small gifts for my father. Sweetie takes care of presents for her parents and aunts. My sister and I stopped exchanging gifts twenty years ago after we sent each other gift cards to the same store for the same amount.

I sent cards, but we don’t have a long list anymore. The last batch went in the mail early this week. We put out a handful of decorations that mean something to us. Most of them are battery-operated animatronics that dance to Christmas carols. (Except for one zebra wearing a Santa hat who sings “Party Rock Anthem.” I don’t know why.) We don’t put up a tree, and we don’t put up lights.

Tomorrow, we’ll tidy the house and wrap the few presents. On Christmas Eve, we’ll bake cookies for Christmas Day. That’s all that’s left to do. It’s nice to have a more relaxed holiday than in years past, when we hosted Christmas dinner with an enormous feast. But it’s also unsettling to have so little to do. Is it really Christmas if I don’t feel like I’m in an unwinnable race?

Appreciating Now

Daily writing prompt
What skills or lessons have you learned recently?

Lesson learned

A little under two weeks ago, I recognized that I needed to slow my pace on my novel. As soon as I did, I rediscovered how much I enjoy the creative act. How easily the words can flow when I don’t force them to come.

That first day, without the frenzied desire to churn out 1,800 words, I spent an hour in discovery. I wrote about each major character’s current knowledge and goals. That suggested the next scene. I turned to the manuscript and seven hundred plus words emerged in what felt like the space between inhale and exhale. It has been like that every day, except once when I stopped at six hundred words because I’d finished the chapter and didn’t want to start the next scene yet.

Each day after I stopped, I felt content that I’d written well. Satisfied by the experience. Proud of myself. That hadn’t been true in a couple of weeks. I’d been pushing myself relentlessly, my eyes on the goal with no concern for the means. That’s the way I’ve operated most of my life.

Last Friday, the son of a friend graduated from college. He was so excited about his accomplishment. He worked hard and now he’s enjoying the praise of his parents and extended family. He’s excited for the future but he’s enjoying this moment. Kudos to him.

I never did. After I dropped out of the University of South Florida, I returned to school via community college. I barely acknowledged my AA degree. I was ashamed that I’d taken the detour. Once I returned to USF, I was on a mission: finish a bachelor’s degree as fast as I could. That’s how I came to major in History instead of English—I had three more credit hours in the former than the latter. The degree was a means to an end. I didn’t attend graduation. I didn’t even let my parents take me to dinner. I was twenty-five and still embarrassed that I was so far behind where I thought I should be. It was much the same for my MA. I attended that graduation, but only because my then-fiancée insisted I’d regret it if I didn’t. You’ll want to remember it, she said.

I remember nothing.

I was already looking forward, wondering what was next, and worrying that I was still behind in a race to a destination I couldn’t even name.

I’ve been running after nothing at all. I have been so allergic to the idea of nostalgia that I not only stopped looking at the past, but also stopped noticing the everyday now. I have turned hobbies into oppressive obligations in my monomaniacal quest for The Future.

As I congratulate my young friend on his accomplishment, I envy his ability to appreciate the Now. I’m grateful that I’m starting to learn how to do it for myself.

Dr. Chatbot or: How I Learned to Stop Worrying and Love the LLM

Last spring, a friend recommended that I learn prompt engineering for ChatGPT. He said it was going to be bigger and more important than anyone realized. I was skeptical, given the various apocalyptic pronouncements I was seeing in the news. But he’d never steered me wrong before. (He warned me not to trust LastPass long before news came out about their first breach, for example.) I watched a couple of YouTube videos on the subject and played around with the tool. I even signed up for a paid subscription so I could access GPT-4 and its associated features.

My results were hit-or-miss. Asked to help me design a workshop, ChatGPT produced an impressive outline and cited a dozen sources I should use to improve my knowledge of the topic. All the sources checked out as real. Asked the same question on a different topic, it gave me vague information about how a workshop should flow. I experimented with having it help explore character creation for a novel; the result was so cliché that I’d have been embarrassed to use any of it. I decided that Large Language Models were an interesting toy that didn’t live up to the hype, cancelled my OpenAI subscription, and turned my focus to other things.

In October, I noticed that one of my colleagues was doing interesting things with AI. After talking to her about what she was doing, I thought I should investigate the subject. When I asked my friend where I should start, he said, “I told you. Prompt Engineering. I can’t stress enough how important it is.”

Clearly, YouTube videos weren’t going to cut it. I also had tried asking ChatGPT itself about the subject, with similarly mixed results to my earlier experiments. I decided to take a class. I found a three-course specialization on Coursera and dove in. I’m so glad I did. I gained a solid understanding of what LLMs do and an appreciation of how they can enhance human creativity and ingenuity.

Beyond the insights about how LLMs work and what they can be used for, I realized that working with ChatGPT is fun. As far back as junior high school, when I learned FORTRAN and then BASIC, I’ve enjoyed exploring what I can make computers do and using them to solve problems. When I got into Scrum and agile coaching, I drifted away from those roots. Don’t get me wrong—I regret nothing about becoming a Scrum Master and a consultant in the agile space. But I’d forgotten how fun it is to ask myself, “What can I make this machine do?”

Now that I’ve finished the specialization, what’s next? I’ll keep learning. There is so much more to know! I’m taking a break from studies through the end of the year, but in January I plan to enroll in a course on Python programming. That will allow me to regain my engineering chops. And, because Python is so useful in working with ChatGPT, I’ll continue to explore the world of generative AI.