why better vocabulary won't fix your ai prompts
the course promised “chatgpt mastery,” “godlike prompts for writing viral content.” so i entered my card details.
copied the first prompt exactly. watched it fail. adjusted the phrasing. watched it fail differently. an hour later i’m staring at generic outputs wondering if i’m stupid or got scammed.
neither, as it turns out.
the template worked perfectly—for the person who wrote it. their expertise. their problems. their outcomes.
when you copy someone else’s prompts, you’re essentially copying their consciousness. the prompts worked for problems i didn’t have, with expertise i didn’t possess, for outcomes i didn’t need.
essentially, i’ve bought a thousand dollars worth of keys. all perfectly crafted. all for locks i don’t own.
the templates weren’t bad. my understanding of language was broken.
in his 1953 philosophical investigations, the philosopher ludwig wittgenstein describes a builder and his assistant.
builder calls “slab!” assistant brings slab.
builder calls “beam!” assistant brings beam.
that’s their entire language. no grammar. no syntax. no dictionary definitions. just patterns of use producing coordinated action.
people would dismiss this as primitive language, but wittgenstein saw something else—that all language works this way.
even sophisticated political discourse is elaborate language games where meaning emerges from patterns of use in context. no words reference reality directly. they create reality through patterns of coordinated action.
the success of llms vindicates the use theory of meaning. transformer architectures were the breakthrough precisely because attention mechanisms allow conditioning on contexts, letting models infer meaning from use patterns.
— samuel hammond
seventy years after wittgenstein. the machines proved him right by learning language purely through usage patterns. no semantic understanding. no conceptual grasp. just that this token follows that token in these contexts with these probabilities.
the story goes that when spanish conquistadors first appeared on aztec shores, the aztecs couldn’t see their ships.
not metaphorically. they literally couldn’t perceive them.
there was no conceptual framework for “floating wooden buildings.”
“something wrong with the sea,” they reported.
without words for something, you cannot fully experience it. language doesn’t label reality. language constructs what reality you can perceive.
imagine explaining to an aztec that the problem isn’t in the sea. that it lives in their vocabulary.
now imagine someone explaining the same thing about your shitty prompts (i just did).
you’ve spent your life believing language references reality. that words point to objects like labels. that words like “professional” and “compelling” mean something specific ai should understand.
language constructs reality through patterns. you’ve been constructing your reality through every sentence you’ve ever thought or spoken without realizing it.
your prison isn’t ai. it’s in believing words mean what you think they mean.
my former boss assigned me: “create a thought leadership article for the company website proving our industry is flourishing.”
“x industry is flourishing.” this was literally the topic she wanted me to write on.
i asked: what’s the objective? what argument do we want to present? what mental barriers do we want to dismantle with this specific piece?
she repeated the assignment verbatim.
“that it’s flourishing. don’t you get it anchit!?”
clearly she’d typed “give me thought leadership blog titles” into chatgpt and received generic ai slop—all in the name of “content strategy.”
she didn’t fail at ai. she outsourced her thinking to it, then blamed the tool for mirroring her confusion back at her. like a chef asking ingredients to taste better.
she played a language game with no rules. “thought leadership” has no shared meaning. not in her mind. not in ai’s training data. not in observable reality.
the phrase simply performed sophistication without communicating anything.
ai gave her exactly what she specified—nothing.
more precisely, it gave her the statistical center of all “thought leadership” content in its training data. billions of posts, collapsed into one forgettable example. the average. the generic. the slop. the thing that fits everywhere because it belongs nowhere.
when you type “professional,” ai doesn’t understand professionalism.
it calculates… given the word “professional” in this context, what words typically follow? billions of training examples vote. the most common words win. synergies, stakeholder alignment, paradigm shifts—the words that appear most often after “professional” in corporate documents.
the statistical center of everything ever called professional.
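that “vote” can be sketched in a few lines of python. this is a toy bigram counter, not a real model: the corpus is made up for illustration, and real llms condition on whole contexts through attention, not just the previous word. but the counting-and-voting intuition is the same:

```python
from collections import Counter

# toy corpus standing in for billions of training documents
corpus = [
    "professional synergies drive stakeholder alignment",
    "professional synergies enable paradigm shifts",
    "professional excellence in client communication",
]

# count which word follows "professional" across the corpus —
# a crude stand-in for the use patterns a model learns at scale
votes = Counter()
for doc in corpus:
    words = doc.split()
    for prev, nxt in zip(words, words[1:]):
        if prev == "professional":
            votes[nxt] += 1

print(votes.most_common(1))  # [('synergies', 2)] — the most frequent follower wins
```

run it and “synergies” wins, simply because it appears after “professional” most often. no understanding of professionalism anywhere in sight.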
when you prompt vaguely, probability landscapes expand. when landscapes expand, outputs slide toward the center.
like asking “give me some food” and receiving the average of all possible meals. edible. forgettable. and satisfying to no one.
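one way to make the widening landscape concrete is shannon entropy: how spread out the probability mass is over possible outputs. the numbers below are invented for illustration, not real model probabilities:

```python
import math

def entropy(p):
    # shannon entropy in bits: higher = more spread-out probability mass
    return -sum(x * math.log2(x) for x in p if x > 0)

# hypothetical next-output distributions over four candidate responses
vague_prompt   = [0.25, 0.25, 0.25, 0.25]  # "give me some food"
precise_prompt = [0.85, 0.05, 0.05, 0.05]  # "medium-rare ribeye, no sauce"

print(entropy(vague_prompt))    # 2.0 bits — anything could come out
print(entropy(precise_prompt))  # ~0.85 bits — one outcome dominates
```

a vague prompt leaves the distribution flat, so sampling lands near the generic center. a precise prompt concentrates the mass, so the landscape has one obvious downhill path.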
you’re not failing at finding the “right” words. you’re failing to architect the probability space ai navigates in.
my boss couldn’t specify because she didn’t know what she wanted. ai just made that visible.
the machine literally forces clarity.
you cannot specify precisely without understanding precisely. cannot define success without knowing what success means. cannot construct the reality you want without knowing what reality you’re constructing.
vague prompts reveal vague thinking.
three questions can help you architect the probability space before you start typing:
i. what specific outcome?
not “marketing advice” but “three retention strategies for b2b saas with 18-month sales cycles, enterprise clients, multiple stakeholders in procurement.”
i ran both versions.
vague prompt: “give me expert marketing advice for my software company to get x results.”
output: “focus on your target audience and create compelling content that addresses their pain points. consider using social media to build brand awareness and email marketing to nurture leads...”
you’ve read this sentence a thousand times—probably from a guy with a generic linkedin-ish smiley profile photo who helps founders build personal brands but, like me, has 100 followers.
it can apply to anything.
precise prompt: “three retention strategies for b2b saas with 18-month sales cycles, enterprise clients, multiple stakeholders in procurement.”
output: “strategy 1: create champion enablement kits—slide decks your internal advocate uses to sell renewal to cfo. include roi calculator pre-filled with their usage data.
strategy 2: map the decision committee six months before renewal. identify the new stakeholders who joined since purchase—they didn’t choose you, they inherited you. run ‘value confirmation’ sessions specifically for them...”
same ai. different probability space. the specificity narrowed the universe of possible responses until generic couldn’t survive.
ii. what context enables that outcome?
not company history. not mission statements. critical facts that shape the probability landscape…
…like current retention rate, customer decision-making structure, competitive position, constraints.
remember wittgenstein? meaning emerges from patterns of use in context. same words, different contexts, completely different meanings.
“water!” could be an order, answer, warning, or request.
context determines the language game you’re playing. information that shapes the prediction, not information that makes you feel understood.
iii. how will i recognize success?
vague success criteria: “write something helpful about customer retention.”
output: generic advice about being responsive and adding value. technically correct. practically useless.
measurable success criteria: “strategies addressing procurement delays, reducing decision-maker turnover impact, implementable without engineering resources.”
output: “strategies must: (1) address procurement committee delays, (2) reduce impact of decision-maker turnover during sales cycle, (3) be implementable without engineering resources, (4) show measurable roi within 90 days.”
it gives you specific tactics targeting each criterion. verifiable. actionable.
the criteria should be something you can verify objectively. not aesthetically. not intuitively. objectively.
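the three questions fold naturally into a template. a minimal sketch, assuming a plain-text prompt is what you want; the function name, the example figures (like the 78% retention rate), and the layout are all mine, not a standard:

```python
def build_prompt(outcome, context, success_criteria):
    """assemble a prompt from the three questions:
    specific outcome, enabling context, verifiable success criteria."""
    criteria = "\n".join(f"({i}) {c}" for i, c in enumerate(success_criteria, 1))
    return (
        f"task: {outcome}\n\n"
        f"context:\n{context}\n\n"
        f"the output must satisfy:\n{criteria}"
    )

prompt = build_prompt(
    outcome="three retention strategies for b2b saas with 18-month sales cycles",
    context="enterprise clients; multiple stakeholders in procurement; "
            "current retention rate 78%",  # hypothetical figure
    success_criteria=[
        "address procurement committee delays",
        "reduce impact of decision-maker turnover during the cycle",
        "implementable without engineering resources",
    ],
)
print(prompt)
```

the point isn’t the template itself. it’s that you can’t fill in the three arguments without doing the thinking first.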
most “essential” context is just ego protection. people want ai to appreciate their business sophistication, their journey, their vision.
unfortunately, the machine doesn’t care about your cute founding story. it might in the future. but right now, it needs exactly enough information to activate the right patterns.
everything else degrades the signal.
friedrich hayek discovered this in 1945, in his essay “the use of knowledge in society.”
the peculiar character of the problem of a rational economic order is determined by the fact that knowledge never exists in concentrated or integrated form but solely as dispersed bits of incomplete and frequently contradictory knowledge which all separate individuals possess.
knowledge is dispersed. your intent exists as scattered thoughts, vague preferences, unstated assumptions, contextual understanding you can’t articulate. ai’s patterns exist as distributed weights across billions of parameters.
coordination requires compression.
in economies, prices compress infinite complexity of products and services into single numbers everyone can coordinate around. the price of wheat contains embedded knowledge: weather patterns, transportation costs, storage capacity, seasonal demand, geopolitical stability. all compressed into $7 per bushel.
prompts are like price signals for ai cognition.
exactly what you do when you tell a barista “large oat milk latte, extra shot, not too hot.” that sentence compresses your coffee consciousness into tokens the barista’s pattern-matching brain can coordinate into actual coffee.
similarly, you’re compressing your dispersed intent into tokens that coordinate ai’s distributed patterns.
the more precisely you compress, the narrower the probability landscape, the less generic the output. the less “here’s a coffee” and more “here’s your coffee.”
token economics matters more than most realize.
every word costs computational resources. your carefully architected opening gets pushed out of working memory by rambling middle sections.
i have literally paid to get access to 2,000-word prompts that produce worse results than 200 properly structured words. not just because brevity is the soul of wit. but because information architecture determines what remains in working memory when ai reaches critical decision points.
the order of information matters more than the information itself.
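a deliberately crude sketch of that claim. modeling “working memory” as just the last N words of the prompt is an oversimplification i’m choosing for illustration (real attention sees the whole window but dilutes early material rather than dropping it), yet the failure mode it shows is the same:

```python
def surviving_context(prompt, window=50):
    # crude model: treat "working memory" as only the last `window`
    # words. real attention is subtler than truncation, but the effect
    # is similar — early material gets diluted by a rambling middle.
    return " ".join(prompt.split()[-window:])

opening = "constraint: no engineering resources."
rambling = "our mission is to delight customers at scale. " * 30
prompt = opening + " " + rambling

# the carefully placed opening constraint no longer survives
print("constraint:" in surviving_context(prompt))  # False
```

the constraint was there. the rambling middle buried it. structure the prompt so critical information sits where it can’t be diluted.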
you’re learning to architect probability spaces where intelligence emerges naturally.
most people spend their entire time trying new words for the same confused thoughts, wondering why ai mirrors their confusion with mathematical precision.
they believe communication is a natural human act that happens automatically.
communication is precision engineering. language is reality construction. every word you arrange creates possible worlds, narrows probability spaces, activates patterns in systems—human and artificial both.
the taoists had a word: wu wei. effortless action. not passive—aligned.
water doesn’t force its way downhill. it finds the path that already exists.
your prompts work the same way.
don’t force ai toward your outcome. architect the landscape so your outcome is the path of least resistance. probability flows downhill. just build the right hill.
wittgenstein discovered this in 1953 studying language games. hayek discovered this in 1945 studying price signals. and now, you can discover this through ai—the first technology that makes some form of consciousness construction visible in real-time.
your language prison has no walls. it’s just the belief that words mean what you think they mean, the assumption that better vocabulary can fix the architectural problems in your prompts.
you dismantle this invisible prison by realizing that language was always pattern architecture, that you were always a reality engineer. you just didn’t know you were constructing probability spaces through every sentence you’ve ever spoken.
but even after learning this, i was still stuck…
language was pattern architecture. i wrote precise prompts with clear outcomes and specific contexts. but they still felt dead. correct outputs, lifeless mechanical results.
something was wrong.
then a guy named christopher alexander made me realize something which i can only tell you tomorrow because i suddenly feel like going on a walk and want you to have something to look forward to.

