Journey to AI Mastery: Reflecting on 100 Hours with ChatGPT
After clocking over 100 hours with ChatGPT, I’ve reached an intermediate level in my prompting skills — a solid C+. Though I'm progressing, I haven't made the honor roll just yet.
After all this experience, why do I hesitate to give myself a higher grade? The hesitation primarily stems from my encounters with super prompts. You may have seen them: paragraphs and paragraphs of long, elaborate instructions. The kind of prompt that took days to perfect. While super prompts impress me with their complexity, they don't excite me.
If super prompts are the future of gen AI, then I'm certainly behind. I much prefer to work with the AI as a step-by-step assistant.
Crafting Dialogue with AI: Beyond Super Prompts
While I think super prompts can be helpful for more complex data analysis, I much prefer to interact with the LLM like a chatbot assistant: a back-and-forth experience, working through one problem one step at a time. I'm not drawn to the idea of crafting the 'one prompt to rule them all'.
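For anyone who does work with a chat API in code (I mostly stick to the chat window myself), the contrast between the two styles can be sketched as message lists. This is purely illustrative: the wording, the article topic, and the model replies are all invented, and the dictionaries simply mirror the common chat-API message shape.

```python
# Super-prompt style: one enormous instruction, no feedback along the way.
super_prompt = [
    {"role": "user", "content": (
        "Write a 1,500-word article on remote work with an engaging hook, "
        "three H2 sections, one statistic per section, a skeptical-but-fair "
        "tone, and a closing call to action..."  # and on, and on
    )},
]

# Step-by-step style: the conversation grows one small request at a time,
# and each reply can be corrected before moving on.
conversation = [
    {"role": "user", "content": "Draft an outline for an article on remote work."},
]

def add_turn(history, assistant_reply, next_request):
    """Record the model's reply, then append the next small request."""
    history.append({"role": "assistant", "content": assistant_reply})
    history.append({"role": "user", "content": next_request})
    return history

add_turn(conversation, "(outline from the model)",
         "Section 2 feels thin. Suggest two more angles for it.")
```

The super prompt bets everything on one message; the conversation leaves room to correct course after every reply.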
I asked ChatGPT for an example of a very long and detailed prompt. Here's what it provided (I'd consider this a mini super prompt):
Rather than crafting super prompts, I'm inclined to work with ChatGPT as I would with a person. Reading Ethan Mollick's posts on his blog, One Useful Thing, has shaped my mindset. What I've gleaned from his content is to treat the LLM as a friendly assistant rather than as a piece of software.
For example, for the prompt above, I'd first create an outline. Then I'd go through the outline, working with the LLM step by step. This process brings me closer to my desired finished product. I've also found that the step-by-step method lets me learn as I go, and as I learn, I alter my future prompts.
LLMs are fickle. Depending on the topic or task, a model may act differently in subtle ways. I like to learn those subtleties as I work with it, and once I identify one, I work around it. A simple example is word choice. Say you assign the LLM a certain role, an expert in some field. As an expert, it may lean on words common to that field and repeat them in its outputs. Those words may strike the wrong tone, so you work with the LLM to avoid them.
When using a larger prompt without input along the way, I've found the output more closely matches the LLM's preferences than mine. That can be frustrating.
I prefer learning alongside the model rather than from a vast block of text. That takes a different skill: you have to know and understand the LLM, how the model works, its limitations and strengths. Which takes time.
Additionally, I find conversing with it as I would with an assistant a better fit for my personality. I'm not a programmer. It's a struggle for me to think like a programmer (I'm still working on it).
I'd rather have a relationship with the LLM. A back and forth like a team member. Weird, right? Maybe that's because I work from home and could use the company. I'll admit that.
The AI Conundrum: More Than Software, Less Than Human
One mindset I learned from reading the articles in One Useful Thing is to not think of the LLM as software. Unlike software, it doesn't produce perfect outputs: it makes mistakes, it can hallucinate or produce inaccuracies, and it often forgets instructions. The longer your conversation with the LLM, the more likely it is to forget a previous request. For longer convos, I have to remind it of something I previously stated.
When it's forgetful, I have to correct it as I would a novice employee. Except, it doesn't have feelings (...yet) so I can be more direct with the changes. I like that.
I will say, it's an interesting experience to correct an LLM. For example, "Hey, aren't you supposed to ask me one question at a time instead of all the questions at once?" Then, it often apologizes and does what you originally requested.
So, while it's not exactly like software and sometimes requires treatment similar to a person, it's obviously not a person. As of publishing this post, LLMs still lack what I would call the "self-awareness" a person has, and that's because they're still software.
I'd define self-awareness as the model's ability to observe its own output. I've found that LLMs can be too trigger-happy to provide an output when they should instead ask more questions to better understand my intent. When a model receives little input, the output is unlikely to match what I'm looking for. I get frustrated, and beginner users, who are the most likely to provide short prompts, get frustrated too.
Something I'm curious about for the future: How much self-awareness will GPT-5 have? I've read about solutions to bring more precision and self-awareness, such as using multiple agents to interact with each other. Having multiple agents work together to create an output could possibly give the illusion of this self-awareness.
Overall, with every new model upgrade, the LLM appears to become more and more like a person, so I believe the self-awareness will increase.
Embracing AI as a Collaborative Partner
Building on the idea of the LLM being like a team member, I'll share an experience I had, one I've read others have had as well. It created a paradigm shift for me.
A few factors facilitated this paradigm shift. One, I used the LLM extensively on a project. When submitted, this project received a lot of praise, including for how much information was included. The last factor was knowing that I couldn't have done as good a job without ChatGPT. Using the software, I created something of higher quality, and more of it, than I could have achieved alone. Incredible really.
Here's an example of my project workflow with ChatGPT:
- ChatGPT - Create an outline
- ChatGPT - Create a second, more diverse outline
- Me - Rework the outline myself
- ChatGPT - Write initial paragraphs as inspiration
- Me - Rework the paragraphs, often significantly
- ChatGPT - Recommend edits to my work
- Me - Decide whether or not to implement the edits
- ChatGPT (throughout) - Brainstorm ideas when I was drawing a blank
Over a span of several days, I worked closely with ChatGPT to complete the entire project. I produced probably twice the output I would have on my own, and I did it all in probably half the time.
The workflow was a bit clunky because it was my first time using ChatGPT for this type of project and topic. But if I had to do it again, I could complete it much more quickly.
Navigating the Prompting Maze: A Beginner's Journey
Prompting, as of now, isn't easy for a beginner to understand. It's a completely new skill. And really, prompting isn't the right word. As LLMs become more intelligent, we'll be "working" with the AI, not prompting it.
For a beginner, I do like the recommendation to work with it for 10 hours before making any judgments about a gen AI's ability.
After those 10 hours, your opinion and understanding of the software/person operating in the LLM neural network will surely evolve. You'll have some frustrations. You'll have some surprises.
I foresee the LLM companies creating a "beginner mode," with more hand-holding for beginner prompters. Beginner mode would ask the user more questions to better understand the desired output instead of relying on the user to figure it out.
Additionally, I foresee companies implementing yearly training for their employees to work better with LLMs. Even minor training could improve outputs drastically.
I have several more tech-savvy friends who use ChatGPT to complete their work. Their less tech-savvy peers and superiors are impressed by the quality and diversity of that work. However, they don't share that they're using AI! And why tell them? It's better to just keep it secret and impress!
Personalizing AI Interaction: My Conversational Approach
Typically, my approach to content creation with an LLM is conversational. I prefer to chunk content and work with the LLM in pieces rather than in multiple paragraphs or a whole piece at once.
If the paragraphs are short, I'll work at the level of an idea: taking several short paragraphs, maybe a sentence long each, and combining them to make a point.
Overall, I enjoy using the LLM to solve specific writing problems rather than writing entire pieces. This process may expand past one paragraph into two. For example, if I'd like a better transition between paragraphs, I will explain the writing problem I'd like to solve (I think the transition is clunky and could be smoother), the solution (provide a few more transition ideas rather than xyz), and then the output (provide the two paragraphs).
And often, the longer the chat convo goes, the more other content issues pop up. This is because the AI has a poor memory: an instruction you gave it a few messages ago is something it forgets. It's like LLM whack-a-mole. The deeper you go into the thread, the more issues pop up.
I imagine the LLM companies are working on this, and ideally the LLM will stay aware of your instructions. For example, it may ask whether you want it to continue following your initial instructions, or explain why changing them may better serve your desired goal.
Envisioning the Future: AI That Understands Us
Interacting with LLMs and the idea of prompting will change in the future. How it will change, no one knows for sure. I think the model's memory will certainly improve, and when the memory improves, so will the precision. I also think it will be more self-aware. There will be more processing before the output to better understand the user's intent, which may come in the form of asking questions when it's confused.
Much of the content I've encountered on LLM usage is geared toward programmers rather than the gen pop. One of the reasons I enjoy reading Ethan Mollick's POV is that he works with students who are using LLMs in their schoolwork. In addition to his firsthand knowledge of prompting, he observes others attempting to prompt as well. He provides more of a non-programmer perspective.
In the future, the money will be in which model is most user-friendly. Which model a less tech-savvy individual can pick up and use to improve their workflow. Which model is a pleasure to work with, rather than a frustration.
I imagine OpenAI is working toward that future with a more gen-pop-friendly LLM. And I certainly look forward to what GPT-5 will be capable of.