How to work with any generative AI
Switching from ChatGPT to Claude to Gemini to Grok, or to any open source model, shouldn't mean relearning how to prompt, what to trust, and what's safe to hand off. Under the hood they share the same handful of traits, and the same few habits get good results from all of them.
Anthropic teaches both, the traits and the habits, in two free Skilljar courses: AI Capabilities and Limitations and AI Fluency Framework Foundations. The traits have been ML vocabulary for years, and the habits are what good prompters have done all along.
How chat LLMs actually work
Every chat LLM has the same handful of properties under the hood. Name the one that just bit you and the fix usually follows.
- Next-token prediction. The model guesses the next bit of text from what came before. It isn't looking anything up. Treat the output as a draft and verify anything that matters.
- Knowledge. The model only knows what it was trained on, and only roughly. For anything recent, niche, or private, paste in the source instead of asking the model to recall it.
- Working memory. Context windows are finite. Don't dump whole repos. Trim, summarize, or chunk before pasting, and avoid burying important content in the middle of long prompts.
- Steerability. Models respond to clear instructions, examples, and structure. If the output is wrong, fix the input first. Show the model what good looks like.
- When properties collide. A strength in one setting becomes a weakness in another. Watch for prompts that ask for creativity and strict format at the same time; the model will usually drift on one to serve the other.
Naming which property went wrong (the model forgot the system prompt, it made up a function, it ignored the format) tells you which lever to pull next.
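The working-memory point above, trim or chunk before pasting, can be sketched in a few lines. This is an illustrative sketch, not any provider's real tokenizer: the 4-characters-per-token estimate and the chunk budget are rough assumptions you would tune for your actual model.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English prose."""
    return max(1, len(text) // 4)

def chunk_by_paragraph(text: str, max_tokens: int = 1000) -> list[str]:
    """Split text on blank lines, packing whole paragraphs into chunks
    that stay under the token budget, so no paragraph is cut mid-thought."""
    chunks: list[str] = []
    current: list[str] = []
    used = 0
    for para in text.split("\n\n"):
        cost = estimate_tokens(para)
        # Flush the current chunk if adding this paragraph would bust the budget.
        if current and used + cost > max_tokens:
            chunks.append("\n\n".join(current))
            current, used = [], 0
        current.append(para)
        used += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Feeding each chunk in a separate prompt (or summarizing chunks first, then asking over the summaries) keeps the important content out of the middle of one giant prompt.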
The 4Ds: a way to work with any AI
Professors Rick Dakan (Ringling College) and Joseph Feller (University College Cork) built the AI Fluency Framework around four habits, the 4Ds.
- Delegation. Decide what to hand off and what to do yourself. Anything that needs real judgment, hidden context, or facts the model can't check is usually a bad fit, no matter which tool you use.
- Description. Tell the model what you want, clearly. Format, examples, who it's for, what to skip. Good prompts work everywhere.
- Discernment. Read the answer with a critical eye. Every model can be wrong with full confidence. The course pairs Description and Discernment as a loop: you ask, you check, you ask again.
- Diligence. Own what you ship. Check the facts, respect privacy, say when AI helped. That's how you should work, not a feature of any one tool.
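The Description habit is concrete enough to template. Here is a minimal sketch of a prompt that states the format, the audience, one example, and what to skip; the task, field names, and template text are hypothetical, not from either course or any vendor's API.

```python
# Hypothetical prompt template exercising the Description habit:
# format, audience, an example of "good", and explicit exclusions.
PROMPT_TEMPLATE = """\
Task: Summarize the release notes below for {audience}.
Format: exactly {n_bullets} bullet points, each under 20 words.
Skip: internal ticket numbers and anything about CI tooling.

Example of one good bullet:
- Search now matches partial words, so "data" finds "database".

Release notes:
{source_text}
"""

def build_prompt(audience: str, n_bullets: int, source_text: str) -> str:
    """Fill the template; the same prompt works unchanged across models."""
    return PROMPT_TEMPLATE.format(
        audience=audience, n_bullets=n_bullets, source_text=source_text
    )
```

Because nothing here is model-specific, the same description travels with you from one provider to the next; only the Discernment pass on the output changes.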
Why this beats the provider debate
Betting on "I'm good at ChatGPT prompts" or "I know the Claude tricks" means relearning half of it every time your team switches tools. The leaderboard shuffles every few months. The traits and the 4Ds don't; they keep paying off through every release.
Both courses are free and short. Take them in order: Capabilities first, Fluency second.