I’ve liked using TypingMind for this; I can add all the LLMs I want, even from OpenRouter, and see them side by side. Have you tested it?
I have not.
Hyperchat is an interesting idea… I always like to compare and evaluate the quality of AI output so this is 💯💯
Thx! Take it for a spin.
LMK how it treats you.
Definitely doing that💯
I’m in the same boat. Any plans for a Windows version?
Maybe!
Right now, it's just a personal project.
Hmmm🤨 so no AGI just yet. Great analysis. Wondering whether the speed also reflects efficiency and energy usage, or something else altogether.
Google has spent decades optimizing their infrastructure and they bought up a bunch of dark fiber in the early 2000s, plus they have their own AI chips.
Perplexity runs on Cerebras, which makes ultra fast AI inference chips.
ChatGPT runs on NVIDIA GPUs, which are slower but more abundant.
Also, if AGI = human-level intelligence, last I checked, humans still have these same problems.
I’ve started to explore Pal Chat, but will try this now. I’m w you on randomness - it’s scary.
What’s Pal Chat?
Saw it on X https://apps.apple.com/my/app/pal-chat-ai-chat-client/id6447545085
Why hyper-focus on the models when the underlying data is so fucked up?
And now that it’s getting walled, it’s going to get worse.
What’s the alternative? Sorry, I’m not following.
The alternative is shifting focus from model disagreements to data integrity. Instead of expecting models trained on random data to magically become reliable, companies should invest in building proprietary, domain-specific datasets with verified accuracy. Prioritize deterministic systems over probabilistic ones where possible.
Your Hyperchat tool proves this - these models contradict each other because they're all working from disparate and often unreliable training data. When we scrape data, it takes an incredible amount of post-processing.
We've seen this before. In derivatives, everyone chased the next hot pricing model, but clean data won (Bloomberg's advantage). While others obsess over model parameters and architectures, our startup is focusing on a hyper-refined, curated dataset. Plus, everything breaks when models get updated, and proprietary data assets add more enterprise value in the long term.