17 Comments
Jason Pulliam:

I’ve liked using TypingMind for this. I can add all the LLMs I want, even from OpenRouter, and see them side by side. Have you tested it?

Matt Mireles:

I have not.

Samuel Theophilus:

Hyperchat is an interesting idea… I always like to compare and evaluate the quality of AI output so this is 💯💯

Matt Mireles:

Thx! Take it for a spin.

Matt Mireles:

LMK how it treats you.

Samuel Theophilus:

Definitely doing that💯

Himanshu:

I’m in the same boat. Any plans for a Windows version?

Matt Mireles:

Maybe!

Right now, it's just a personal project.

Kevin Mireles:

Hmmm🤨 so no AGI just yet. Great analysis. Wondering whether the speed also reflects efficiency and energy usage, or something else altogether.

Matt Mireles:

Google has spent decades optimizing their infrastructure, they bought up a bunch of dark fiber in the early 2000s, and they have their own AI chips.

Perplexity runs on Cerebras, which makes ultra fast AI inference chips.

ChatGPT runs on NVIDIA GPUs, which are slower but more abundant.

Matt Mireles:

Also, if AGI = human-level intelligence, last I checked, humans still have these same problems.

Madhu Chamarty:

I’ve started to explore Pal Chat, but will try this now. I’m w you on randomness - it’s scary.

Matt Mireles:

What’s Pal Chat?

Michael Aiken:

Why hyper-focus on the models when the underlying data is so fucked up?

And now that it’s getting walled, it’s going to get worse.

Matt Mireles:

What’s the alternative? Sorry, I’m not following.

Michael A.:

The alternative is shifting focus from model disagreements to data integrity. Instead of expecting models trained on random data to magically become reliable, companies should invest in building proprietary, domain-specific datasets with verified accuracy. Prioritize deterministic systems over probabilistic ones where possible.

Your Hyperchat tool proves this: these models contradict each other because they're all working from disparate and often unreliable training data. When we scrape data, it takes an incredible amount of post-processing.

We've seen this before. In derivatives, everyone chased the next hot pricing model, but clean data won (Bloomberg's advantage). While others obsess over model parameters and architectures, our startup is focusing on a hyper-refined, curated dataset. Plus, everything breaks when models get updated, and proprietary data assets add more enterprise value in the long term.
