The Telepathic Computer
Apple's AI problem and what Siri should be - featuring Sam Altman, melting iPhones and AppleCare Platinum.
For now we see only a reflection as in a mirror; then we shall see face to face.
Now I know in part; then I shall know fully, even as I am fully known.
1 Corinthians 13:12
My Dad Needs Better AI
My father sits before a 75-inch TV that serves as his computer monitor, squinting despite its enormous size. After 95 years on this earth, he is nearly blind and half-deaf. He leans forward, straining to position the cursor on a tiny text link. 'Goddammit,' he mutters, as the cursor overshoots for the third time.
This is a great man who clawed his way from poverty to a PhD and life in the middle class - now defeated by a mouse pointer. But then he leans back, takes a breath, and speaks: 'Siri, call Matt.' For a brief moment, as my face appears on his screen, technology disappears and only connection remains.
This moment - this glimpse of an interface that adapts to human limitation rather than demanding humans adapt to it - is our technological destiny.
Apple’s Wasted Silicon
Last month, Apple quietly canceled their "more personalized Siri" features. The ones they'd been showcasing in polished commercials for six months. The ones they promised would transform how we interact with our devices. Gone. Vapor. Pushed to "next year" (which in Apple-speak means "maybe never").
John Gruber called this debacle "bullshit" and a "fiasco," noting that "careers will end" over it. But what most observers miss is that Apple's problems with AI aren't merely execution failures. They represent a fundamental architectural contradiction.
Look at the latest MacBook Air lineup. Nearly half the compute cores on Apple's M-series chips are Neural Engine cores - designed specifically for AI acceleration. Most of the time, these cores sit idle, doing nothing. Roughly 45% of the chip's compute power is wasted.
Why? Apple's Neural Engine (ANE) was designed for a pre-transformer era of AI - the bygone era when CNNs (Convolutional Neural Networks) were king. It unlocked capabilities like Face ID, where the input size is constant (e.g., an image of fixed dimensions). The ANE is a specialized ASIC for on-device AI that only accepts fixed-shape inputs (correction: this is no longer technically true, but the architectural tradeoffs I describe still largely hold - just with more nuance)1, not the dynamic inputs that characterize modern transformer-based AI (e.g., text of varying length).
This fundamental architectural limitation forces local AI developers to bypass the ANE entirely and rely on the GPU, which is less compute-efficient and less energy-efficient for AI workloads. The result is a staggering waste of computational potential.
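If you're curious what that bypass looks like in practice, here's a minimal Core ML sketch in Swift. The model name is a placeholder and this isn't anyone's production code - just the common pattern of pinning a transformer-style model to the GPU via MLModelConfiguration.computeUnits:

```swift
import Foundation
import CoreML

// Minimal sketch of the common workaround, not production code.
// "TransformerModel.mlmodelc" is a placeholder for a compiled Core ML model.
let config = MLModelConfiguration()

// .all would let Core ML schedule work onto the Neural Engine, but
// dynamic-shape transformer graphs have been a poor fit there, so
// local AI developers often pin the workload to the GPU instead.
config.computeUnits = .cpuAndGPU   // bypass the ANE entirely

do {
    let modelURL = URL(fileURLWithPath: "TransformerModel.mlmodelc")
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Loaded model with inputs: \(Array(model.modelDescription.inputDescriptionsByName.keys))")
} catch {
    print("Failed to load model: \(error)")
}
```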
Consider how much further ahead they'd be in the local AI game if a $999 MacBook Air functioned like a machine with a 24-core GPU instead of its current 8-core GPU plus a mostly useless 16-core Neural Engine. So much compute power is sitting on the sidelines, with very little benefit to the user.
I Built a Local AI App that Burned Sam Altman’s Hand and All I Got Was This Crazy Story
I built the first real-time generative AI app for Apple Silicon. It was a stunning feat of engineering - we trained an AI model from scratch to generate video on-device in 2021. We just had one problem: our AI made iPhones physically melt.
I co-founded Oasis in 2019 with the idea of building an ultra-low-bandwidth, AI-native video communication network. Instead of transmitting video over the internet, our app transformed video into lightweight motion-capture and emotion metadata, sent that to the receiving person’s app, and rendered it there as a photorealistic avatar in real time, using an AI model running locally on the Apple Neural Engine.
We invented a new ultra-low-compute AI architecture in 2020 and trained our first AI foundation model from scratch. When we got a prototype of the entire end-to-end system working in early 2021, I immediately lined up meetings on Sand Hill Road. I was raising from tier-1 angel investors on an uncapped SAFE. My confidence was at an all-time high.
After two weeks of demos, I began to notice a curious pattern - a lot of people were saying that Oasis made their phone really hot. Intuitively, this made sense - we were doing motion capture and generating two HD avatar video streams at 60fps simultaneously in real-time, using the iPhone 11’s GPU and Neural Engine.
I shrugged it off - my CTO was investigating the problem - because I had a big week in front of me. Meetings with Sequoia Capital and Andreessen Horowitz were scheduled for a few days out, and now Sam Altman - the CEO of OpenAI - had asked to meet the next day. Instead of a Zoom demo, he wanted to do the entire pitch meeting through the Oasis app. I was extremely pumped.
The meeting with Sam started so well. Three minutes into the call, we were talking investment terms. My mind was doing the happy dance. Eight minutes into the meeting, Sam made a comment, “My phone is getting warm.”
Oh no, I thought. It’s happening again. No. Please no.
Thirteen minutes into the call, Sam let out a shriek: “It’s burning my hand. It’s burning my hand!” His avatar froze in an awkward position. Thirty seconds later, the call dropped. No money changed hands.
Same thing happened at Sequoia Capital.
The next day, our QA guy called me up. “Hey dude, I think we have a problem with the phones overheating…”
I cut him off. I know, I said. We’re looking into it.
“Yeah, so on this one phone, the camera, like, it stopped working,” he continued.
Surely this must just be a one-off, I retorted.
“No man,” said the QA guy, “it’s happened three times now. I started with twelve test iPhones last week, and now I’m down to nine. The phones start, but the camera just shows a black screen. It’s like we’re melting circuits inside the phone.”
After the panic subsided, I got us a meeting with a friend-of-a-friend on the Apple Silicon team. “Oh wow, that’s an iOS bug,” the Apple engineer explained, leaning back in his chair. “The iPhone should shut itself off before it melts, but we’ve never tested generating video constantly for 30 minutes straight on the Apple Neural Engine. Can you send me a copy of your software so we can add it to our testing suite? Unfortunately, I won’t be able to follow up with you on this and, well, you’ll never hear from me again, but thanks for reporting this!”
This wasn’t the last time we pushed ANE further than Apple. Eventually, we realized that we were probably the only people outside Apple building AI software for their special hardware. We were the guinea pigs - and it sucked.
But the experience taught me a lot about building and running AI models on local AI hardware.
Apple’s Privacy Paradox
Apple’s ANE problem is just the surface manifestation of a deeper issue. Apple's primary constraint isn't technical - it's ideological.
Apple has built its modern brand identity around privacy maximalism. "What happens on your iPhone stays on your iPhone." It's a powerful marketing message, and certainly differentiates them from Google and Facebook.
Yet effective AI requires exactly what Apple's privacy maximalism prohibits: access to real-world usage data.
When you're building deterministic software, privacy maxxing is fine. You design features, QA them with 10,000 test cases, and ship. The software does the same thing every time. But probabilistic AI interfaces are fundamentally different. They require:
A tolerance for failure
Real-world usage data to improve
Continuous learning from user behavior
The ability for human engineers to manually review failures and edge cases
Apple's privacy stance explicitly disallows collecting and analyzing that data. They've kneecapped themselves in the AI race by choice.
No One Needs a Shitty Robot
For my father, Siri seemed revolutionary, but proved frustrating. Revolutionary because he can trigger actions with his voice that would be impossible through conventional interfaces. Frustrating because Siri's limitations are so apparent.
When Siri works, it's magic. When it fails – and it fails often – there's no improvement mechanism. Each time my dad says "Call Matt" and Siri responds with "I don't see 'Matt' in your contacts" (despite calling me successfully yesterday), that failure disappears into the void. No learning happens. No improvement occurs.
What makes this especially painful is watching my father's joyous discovery of a technology that lets him overcome the limits of his biology, only to be repeatedly let down by its inconsistency. With each failure, he loses trust in Siri – and uses it less.
AppleCare Platinum
My father has a guy named Gabriel in India with TeamViewer access to his entire computer. Gabriel can log in anytime and fix whatever my dad breaks. My dad loves this guy and keeps telling me to offer him a job at my startup. I've talked to him once – he seems nice and nothing bad has happened yet, but the situation fundamentally terrifies me. Yet it's the only way my dad can work on writing his book. There's simply no better option.
Apple could build a legitimate, secure version of what Gabriel provides – starting as a premium white-glove human-powered service and evolving into a fully-automated, on-device agentic AI interface.
Call it AppleCare Platinum.
AppleCare+ runs about $100 per year. AppleCare Platinum would start at $5,000 per year – positioned deliberately as a luxury service for wealthy seniors while Apple's capacity to deliver is limited. At this price point and scale, Apple could assign dedicated support specialists with extensive training and security clearance who can access customers' devices remotely through secure channels built into macOS.
Unlike Gabriel and his TeamViewer access, these specialists would operate under Apple's supervision with enterprise-grade monitoring to prevent fraud, identity theft, or elder abuse. And unlike random tech support shops, Apple's specialists would be trained specifically on accessibility needs for aging users.
Just as Tesla started with the Roadster and worked down to the Model 3, this starts at $5,000 a year and ends as the free experience for everyone.
Ender's Game for Training Data
AppleCare Platinum would throw off incredibly valuable AI training data as exhaust. Think about it: Screenshare videos of older people telling someone inside their computer what to click on, trying to describe what they want to happen. If you wanted to teach a computer to move a mouse cursor and take action in response to a human talking, you couldn't get a better dataset than this.
In order to scale at all, the service would quickly have to evolve into a hybrid human-AI system, following the same playbook as the autonomous car companies: monitoring human drivers first, then a period of driver assistance with steady incremental improvement, until finally complete autonomy. The key is that there's value to the user at each step along the way, not just at the end state.
This progression doesn't just solve the data collection problem – it creates an ethical pathway for Apple to serve the fastest growing and most underserved demographic in the world. By starting ultra-premium and working downward, they can perfect the service and throttle demand before scaling it to millions.
By allowing us to learn from your sessions, you're helping make this technology better for you and accessible to people who can't afford a $5,000 per year subscription.
With a balance sheet and customer service operation like Apple's, they could absorb the initial high costs of human specialists while building the dataset needed to automate intelligently. The brilliance is that users would be explicitly invited to participate in democratizing this technology.
This transparent approach would align with Apple's brand values. They could frame participation as a form of tech philanthropy – wealthy early adopters helping to create accessibility tools that will eventually benefit millions of seniors and people with disabilities.
Beyond the Mouse: The Invisible Interface
When I watch my father struggle to position a cursor with his shaky hands on a text link that he can barely see on a 75-inch screen, I see the limits of our current paradigm.
The mouse is 40 years old. The touch screen is 16 years old. These interfaces assume physical precision and visual acuity that a very large (and growing!) number of humans simply don't have – and that none of us will have as we age.
What Apple should build isn't just a better Siri - it's a fundamentally new way to control computers that transcends physical interfaces entirely.
Imagine my dad looking at a headline and saying "read this article" – no clicking required. Or glancing at a photo and saying "send this to my son" – no menus, no hunting for share buttons. "Make everything bigger" without diving through system preferences. "What am I looking at?" while focusing on something confusing. Eventually, maybe he doesn’t even have to ask - the computer just gives him what he needs.
My dad’s computer (and phone) should understand him like a close friend - or at least a very observant, emotionally-intelligent human who knows him well. It’s his personal computer - it should feel like an extension of his mind.
This multimodal approach - sketched roughly in code below - would combine:
Voice for commands and questions
Eye tracking for intent awareness
Screen context for environmental understanding
User history for personalization
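To make that combination concrete, here's a purely hypothetical sketch. None of these types or field names exist in any Apple SDK - they're invented for illustration - but they show what a single fused "moment of intent" would need to carry:

```swift
import Foundation
import CoreGraphics

// Hypothetical sketch only - none of these types exist in any Apple SDK.
// The point is what a single fused "moment of intent" would need to carry.
struct MultimodalIntent {
    let utterance: String                 // voice: "send this to my son"
    let gazeTarget: CGRect?               // eye tracking: the screen region being looked at
    let visibleElements: [String]         // screen context: accessibility labels currently on screen
    let personalContext: [String: String] // user history: who "my son" is, habits, preferences
    let timestamp: Date
}

// The hard, learned part: turning that bundle of signals into an action.
protocol IntentResolver {
    func resolve(_ intent: MultimodalIntent) -> String   // e.g. "Share photo with Matt"
}
```

The struct is the easy part; the resolver is where Apple would need real-world usage data.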
This could become Apple's next revolutionary interface paradigm, following the mouse (Mac), click wheel (iPod), and multitouch (iPhone). Each of these transformed computing by removing abstraction layers between human intent and computer action.
But building this requires exactly what Apple's current privacy stance prohibits: learning from real human behavior at scale. AppleCare Platinum creates the ethical data pipeline Apple needs while maintaining user trust and providing immediate value.
The Telepathic Computer
The screen, keyboard, mouse, and even voice commands are evolutionary bottlenecks - vestigial organs in our technological body that limit the flow of information between man and his symbiotic machines.
What happens when computers truly understand us - not just our taps and clicks, but our needs, limitations, and intentions?
The user interface of the future isn't an interface at all. It's invisible. It's the computer adapting to human limitations rather than humans adapting to computer limitations. It's my father speaking naturally to a device that understands what he means, not just what he says.
A truly telepathic interface would do for thought what fire did for food - make it more digestible, more potent, more transformative. It would unlock human potential currently constrained by biological limits just as cooking unlocked nutrition previously inaccessible in raw food.
This isn't science fiction - it's the logical conclusion of trends already in motion. The pieces exist. What's missing is the vision to assemble them into something transcendent rather than merely convenient.
Apple has all the pieces: the hardware, the ecosystem integration, the design expertise, and most importantly, the trust of its users. What it needs now is to see privacy not as a barrier but as a competitive advantage in building personalized AI. Not "no data collection," but "better data collection with genuine user consent and transparent value exchange."
AppleCare Platinum isn't just a service offering; it's the first step toward an Apple where AI isn't just a feature but fundamental to how we interact with technology. Where symbiotic machines bend to human needs rather than the other way around. One step closer to the merge.
My father deserves that future. We all do.
If Apple doesn’t build it, someone should.
Correction: On Twitter, @anemll pointed out that, as of iOS 17.4, the Apple Neural Engine now supports dynamic shapes via optimization hints. While this makes transformer workloads possible, the architectural tradeoffs I describe still largely hold - just with more nuance than I originally presented. Here’s how it works now…
The Neural Engine can now handle dynamic shapes with the ReshapeFrequency.Infrequent hint
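In code, my understanding is that the usage looks roughly like this (a sketch I haven't shipped myself; the model path is a placeholder):

```swift
import Foundation
import CoreML

// Sketch: asking Core ML (iOS 17.4+ / macOS 14.4+) to keep a flexible-shape
// model on the Neural Engine by promising that input shapes will rarely change.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine   // prefer the ANE (iOS 16+ / macOS 13+)

if #available(iOS 17.4, macOS 14.4, *) {
    config.optimizationHints.reshapeFrequency = .infrequent
}

do {
    let model = try MLModel(
        contentsOf: URL(fileURLWithPath: "TransformerModel.mlmodelc"),
        configuration: config
    )
    _ = model   // run predictions as usual from here
} catch {
    print("Failed to load model: \(error)")
}
```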
BUT - and this is crucial - there are significant caveats:
Performance hit on reshapes
Need to carefully manage reshape frequency
Still more complex than GPU deployment
The core architectural tension remains - retrofitting transformer support onto silicon designed for the CNN era still requires careful engineering workarounds and comes with performance tradeoffs.
I appreciate both the correction from @anemll and the effort from Apple.