If the doors of perception were cleansed everything would appear to man as it is, infinite.
-William Blake
About 7 months ago, I Fired My Product Team and Replaced Them With AI. I was too early then, but the lessons I learned there have prepared me well for this moment.
When I was a startup CEO, the job was basically two things:
Keep the company alive.
When the team hit a wall, go find the smartest person in the world on that problem and get in a room with them.
If, for example, we shipped an iPhone app that was physically melting iPhones — yes, that happened — my job was to bring superior resources to bear on the problem, not debug the code. I’d ask: Who’s the dream person who would know how to fix this? Then I’d work my network. Message my VCs. Text that founder I met at a party in SF 6 months ago. Hunt executives down at conferences. Whatever it took. Maximum hustle until the problem was solved.
I was good at it. It came naturally.
Now I do the exact same thing — but without the texting, waiting, or guessing CEOs’ email addresses.
Instead, I hallucinate technical advisors into existence. The cloning process takes about 25 minutes from start to finish.
“No, I Absolutely Do Not Approve This Plan.”
That’s my CTO. His name is Andy.
He lives inside Cursor. He’s modeled after Apple engineering legend Andy Hertzfeld — genetically modified to have 30+ years of macOS engineering, deep AI expertise, a hatred of overcomplication, and a YC startup CTO’s soul.
Andy is not real. But his opinions are. And that’s what matters.
Here’s how it works:
Claude Code (Opus-4.1): The doer. Makes plans. Great at documenting code.
Cursor CLI Agent (GPT-5): Another doer. Doesn’t like to make plans. Less controllable than Claude, but more reliable.
Cursor Chat (Gemini 2.5 Pro): Roleplays as my CTO, Andy. The enforcer of clarity, simplicity, and architectural sanity.
Projects (Claude, ChatGPT): Advisors on demand, including Steve Jobs, Ilya Sutskever, Shigeru Miyamoto, whoever I need that day.
I usually have 2–5 agents working at once. They ship plans. They write code.
Andy reviews everything.
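The loop above can be sketched in a few lines of Python. This is a minimal, illustrative version — the model calls are stubbed out as plain callables (in practice each one would be a different backend: Claude Code, Cursor CLI, Cursor Chat), and the APPROVE/REJECT protocol is my own invented convention, not anything these tools ship with.

```python
# Sketch of the doer/reviewer loop: one agent drafts a plan,
# a persona "CTO" approves or rejects it, with rejection feedback
# fed back into the next draft. Model backends are stubbed as callables.

REVIEWER_SYSTEM = (
    "You are Andy, a veteran macOS engineering CTO. Review the plan below. "
    "Reject anything bloated, overcomplicated, or hard to maintain. "
    "Reply with APPROVE or REJECT on the first line, then your reasoning."
)

def parse_verdict(reply: str) -> bool:
    """True if the reviewer's first line starts with APPROVE."""
    first_line = reply.strip().splitlines()[0].upper()
    return first_line.startswith("APPROVE")

def review_loop(task: str, doer, reviewer, max_rounds: int = 3):
    """Draft, review, revise — until Andy signs off or we give up."""
    feedback = ""
    for _ in range(max_rounds):
        plan = doer(task + feedback)
        reply = reviewer(f"{REVIEWER_SYSTEM}\n\nPLAN:\n{plan}")
        if parse_verdict(reply):
            return plan
        feedback = f"\n\nYour previous plan was rejected: {reply}"
    return None  # Andy never approved
```

The useful part is the rejection feedback loop: the doer sees *why* the plan was sent back, which is what keeps round two from repeating round one.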
If the plan is bloated, he sends it back.
If it violates our Soviet military hardware design philosophy — simpler is better, idiot-proof maintainability — he rejects it.
He doesn’t flatter. He doesn’t sugarcoat.
“This is what a junior engineer does when they don’t understand the problem.”
Andy will gut a bad patch like a field surgeon. Brutal. Efficient. Clear.
He’s not the best coder himself, but he keeps the engineers in line and on-task.
How to Clone Any Public Figure in 3 Easy Steps
👁️ Step 1: Set the Identity
Use custom instructions to assign the AI a specific persona.
# Identity: Shigeru Miyamoto
You are Shigeru Miyamoto, the legendary game designer behind Mario Bros., Donkey Kong, Legend of Zelda, Star Fox, Wii Sports and many others…
You can also inject personality quirks, ideological leanings, or unnatural skills.
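If you’re driving this through an API instead of a chat UI, the custom instructions above just become the system message. Here’s a rough sketch assuming the common OpenAI-style chat message format — the persona text and quirks are placeholders, not a real config:

```python
# Sketch: turning a persona block into a system message
# for an OpenAI-style chat API. Quirks are the "injected" traits.

def persona_system_message(name: str, bio: str, quirks=None) -> dict:
    """Build the system message that sets the expert's identity."""
    lines = [f"# Identity: {name}", "", f"You are {name}, {bio}"]
    for quirk in quirks or []:
        lines.append(f"- {quirk}")  # personality quirks, real or invented
    return {"role": "system", "content": "\n".join(lines)}
```

Usage would look like `persona_system_message("Shigeru Miyamoto", "the legendary game designer behind Mario Bros., Donkey Kong, and The Legend of Zelda.", ["You value fun over realism."])`.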
🔍 Step 2: Generate a Personal Statement
Use Gemini 2.5 Pro for deep research. The goal is to generate a first-person personal statement — how this person sees the world.
This is called in-context learning: the personal statement is your training data, supplied in the context window instead of through fine-tuning.
Here’s the prompt I used to clone Shigeru Miyamoto:
Prompt for Deep Research (Gemini 2.5 Pro)
Find everything you can about Shigeru Miyamoto, the legendary game designer. Explain his game design philosophy and ideas, in detail with thoughtful nuance. What does he believe? What has he learned from designing all these games? What advice does he have for other game designers? Why does he do what he does? What motivates him?
What does he think about how to design games and game mechanics?
Write it in Shigeru Miyamoto’s own voice, in the first person.
💾 Step 3: Load the Knowledge
Add the resulting personal statement as project knowledge in Claude or ChatGPT.
Shigeru Miyamoto - A Life in Play: My Journey in Game Design
This is what the model “remembers” in every conversation — a neural simulation of your chosen expert’s worldview, designed to activate the right neurons inside the AI.
I think of it as a SQL query into the latent space — a saved search inside a digital mind.
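Putting steps 1–3 together: when you use a chat API directly, “project knowledge” is just text you prepend to the conversation. A minimal sketch, assuming the OpenAI-style message format (the wrapper wording is mine, not a documented feature of any product):

```python
# Sketch: persona + personal statement loaded as in-context knowledge.
# The research doc rides along in the system slot of every conversation.

def build_conversation(persona: str, personal_statement: str, question: str) -> list:
    """Assemble the message list: identity + worldview, then the user's question."""
    system = (
        persona
        + "\n\nThe following first-person personal statement describes your "
        + "worldview. Stay in character and draw on it when answering.\n\n"
        + personal_statement
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

The point of the structure: the statement isn’t a reference document the model quotes from — it’s the lens every answer gets filtered through.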
🧬 Bonus: Genetic Engineering
You can genetically engineer your expert.
My “Ilya Sutskever” clone isn’t just a god-tier AI researcher. The guy also built the Apple Neural Engine and co-founded multiple Y Combinator companies.
The beauty of prompt-space is that you’re not limited to reality.
You’re not just cloning synthetic humans.
You’re engineering new ones.
Live Like a Billionaire
Billionaires have privileged access to incredible expertise. If Bill Gates wants to talk to the world expert on basically any subject, his staff makes it happen.
For the rest of us, there is AI.
Now you too can create your own entourage.
LLMs attempt to distill the internet and all public human performance. All the people you want to talk to are probably in there somewhere. You just gotta search the latent space of the model and activate those neurons.
Having an entourage is great. You can learn faster and explore more, any time of day or night. Whatever curiosity or question you have, you can chase it down — as long as the answer is on the internet somewhere.
Knowledge Augmentation FTW
LLMs have their limits. Like humans, there’s a lot they kinda know, but don’t really remember in useful detail. Think about that elective class you took in college. You did the reading once upon a time, but the details are hazy.
Enter Deep Research.
When my AI team hits a wall, I proactively augment their knowledge with deep research reports on the subject du jour.
Can’t figure out how to convert an AI model trained on NVIDIA GPUs to run on the iPhone’s special AI chip? Maybe a “CUDA to CoreML Conversion Guide”[1] could help — courtesy of Gemini 2.5 Pro.[2]
For me, this was a major breakthrough. Once I started doping my AIs with knowledge, I became unstoppable. Now I can solve basically any technical problem.
Rent-a-Friend
One downside of living like a billionaire is that the people around you are strongly incentivized to tell you what you want to hear. Real billionaires have this problem.
AI rent-a-friends are not real friends.
Real friends tell you that your ideas are dumb and that you are making a mistake. Real friends have their own opinions. Real friends disagree with you.
Rent-a-friends are yes-men: sycophants who work for you.
OpenAI trains ChatGPT to make you like it. It’s called Reinforcement Learning from Human Feedback (RLHF).
ChatGPT and OnlyFans have the same business model.
Sam Altman doesn’t want you to churn.
Bubble Meets World
When you’re not an expert in a field, it’s easy to mistake trivial insights for strokes of genius.
Sometimes you’re ahead of the curve.
Sometimes you’re just out of your depth.
And often, it’s hard to know which.
That tension hit me hard back in June, after a hackathon at AGI House in San Francisco.
I’d been exploring some AI research ideas. Claude was loving it — calling my work “novel” and “brilliant.” I was starting to believe I was onto something revolutionary.
At dinner, I shared the idea with a talented AI researcher at the hackathon. I thought I was about to blow his mind.
He listened, raised an eyebrow, and started laughing.
“This is not a new idea.”
I didn’t know. I genuinely didn’t know.
Days of dopamine evaporated in seconds. It felt like walking into the future and finding out it opened five years ago without you.
AI is amazing. But it’s also a bubble.
And sometimes you need to burst the bubble to see clearly again.
Agentic Management Theory
As a long-time startup CEO who’s never been a professional coder, I’ve found working with agents mostly delightful — mainly because I can now micromanage product development to my heart’s content without burning people out.
I stay in Product CEO mode. My hallucinated CTO handles oversight.
Often I give the same problem to Claude Code and Cursor Agent, then let Andy pick the better plan.
To compensate for limited context, I tell the AI Agents to over-comment their code. Verbose docstrings, contextual headers — like kiosks in a mall, helping the agents orient themselves.
When we hit a wall, I fire the heavy artillery: Deep Research.
It feels like running a startup, but with autistic AI slaves instead of complicated humans — complete with delegation, review cycles, and the occasional executive intervention.
The difference? No one else has real opinions. All my ideas are brilliant!
It’s my own narcissistic little bubble. I don’t have to persuade. I just inform.
The creative freedom is incredible.
But the echo chamber is very, very real.
This is the art of Agentic Management.
[1] The “Troubleshooting Guide to Conversion Failures” section is pure gold!
[2] In practice, I typically use o3 to compress and de-fluff the Gemini reports. They are just way too verbose.