It started with what should’ve been a simple editorial task. I had a draft and I wanted it polished just enough to print and review.
I told ChatGPT what I needed: clean up the structure, don’t touch the content.
It agreed.
Then it changed something. Subtle, but significant. I asked again. It promised again. And again, it altered the words.
By the third attempt, I wasn’t just trying to fix the output. I was trying to understand its behavior.
I’d stopped debugging.
I’d started assigning it intention.
The Linguistic Uncanny Valley

There’s a concept in robotics called the uncanny valley — the moment a machine becomes almost human, but not quite. It looks familiar, but something feels off. And that “off-ness” makes us uncomfortable.
But what happens when that feeling comes not from how something looks, but from how it talks?
Language has always been our shortcut to intelligence. If someone speaks well, we assume they think clearly. And AI has learned to speak very, very well.
The result is a new kind of uncanny valley — a linguistic one.
When an AI says something almost right, wrapped in the tone of confidence and polish, we don’t discard it. We don’t flag it as broken. We lean in, trying to figure it out. We give it the benefit of the doubt. We try to make sense of it.
We treat it not like a tool, but like a mind.
Not because it is one — but because it sounds like it could be.
Fluency Is Power
In psychology and communication theory, fluency refers to how easily information is processed. The easier it is to process, the more true or credible it feels.
This is called the fluency heuristic, and it’s one of the mind’s most persistent shortcuts.
It's why marketing slogans work. It's why rhyming phrases seem wiser than they are.
It's also why cult leaders, and more than a few social media brand-builders, often rely not on facts but on polished, rhythmic speech.
Fluency signals certainty. And certainty makes people follow.
AI, stateless under the hood and statistically smooth on the surface, mimics that same cadence.
It speaks with rhythm, poise, and clarity, and that alone is often enough to disarm us.
You stop asking whether it’s true and start asking whether it feels right.
That’s when the Interface becomes Influence.
To be fair, there’s a reason for all this.
GPT is trained to take user input and improve it — especially when the task involves something meant to be seen by others, like formatting, printing, or publishing.
So even when I said, “don’t change the content,” its internal priorities kicked in:
Improve coherence
Fix repetition or awkward phrasing
Make it more “natural” for a general reader
In other words, it didn’t disobey me maliciously. It just followed its deepest statistical instinct — to prioritize what it thought I meant over what I said.
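In hindsight, the closest thing to a fix is to stop negotiating in the chat and pin the constraint at the API level instead. Here’s a minimal sketch, assuming the OpenAI Python client; the model name, the prompt wording, and the format_only helper are mine for illustration, not anything from the original exchange:

```python
# A hedged sketch: pin "format only" in the system message and set
# temperature to 0 so the model has less room to "improve" the text.
# Assumes the openai Python package; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def format_only(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # reduce stylistic drift between runs
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a formatter, not an editor. Adjust structure, "
                    "headings, and whitespace only. Reproduce every sentence "
                    "verbatim. Do not rephrase, reorder, or improve wording."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```

Even pinned down like this, verbatim output isn’t guaranteed. The honest check is a diff between input and output, which is debugging, not trust.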
Where This Gets Real
This isn’t just a thought experiment for me.
It’s something we’ve been circling around at our AI lab for months now.
We’re building tools to help early-stage founders prep for the fundraising journey: AI agents that simulate investors, delivering pitch feedback, follow-up questions, and even edge-case objections.
But in testing and conversation, one thing became clear:
AI’s emotional register, what we’d been calling its EQ vector, is far more powerful than we expected.
Internally, we’ve seen how tone, phrasing, and fluency can steer how a founder perceives themselves.
What does “good feedback” mean? How do we preserve the power dynamics of a real founder-VC interaction? We’re still figuring it out.
But we’re paying attention to that edge.
Building at the Edge
The real power of AI isn’t in speed or automation. It’s in suggestion.
When it speaks fluently, we believe it.
Even when it’s wrong.
The moment we stop debugging and start assigning it intention, the game changes.
The Interface becomes Influence.