I asked it to design a submarine for my cat, and literally the instant my finger touched return, the answer was there. And that is factoring in the round-trip time for the data too. Crazy.

The answer wasn't dumb like the ones others are getting. It was pretty comprehensive and useful.

  While the idea of a feline submarine is adorable, please be aware that building a real submarine requires significant expertise, specialized equipment, and resources.
reply
It's incredible how many people are commenting here without having read the article. They completely miss the point.
reply
With this speed, you can keep looping and generating code until it passes all tests. If you have tests.

Generate lots of solutions and mix and match. This opens up a new way to look at LLMs.
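
A rough sketch of that loop, assuming a hypothetical generate_code() stand-in for whatever inference API you're hitting and pytest as the test runner:

    import subprocess

    def generate_code(prompt: str, attempt: int) -> str:
        # Hypothetical stand-in for a call to the model; swap in your client.
        raise NotImplementedError

    def passes_tests(candidate: str) -> bool:
        # Write the candidate out and run the project's test suite.
        with open("candidate.py", "w") as f:
            f.write(candidate)
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def solve(prompt: str, max_attempts: int = 100):
        # Keep regenerating until a candidate passes, or give up.
        for attempt in range(max_attempts):
            candidate = generate_code(prompt, attempt)
            if passes_tests(candidate):
                return candidate
        return None

At thousands of tokens per second, even a hundred attempts stays interactive.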

reply
Not just looping, you could do a parallel graph search of the solution-space until you hit one that works.
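
A sketch of what that could look like, with expand(), score(), and passes_tests() as hypothetical hooks (strictly it's a beam-style frontier search rather than a full graph search):

    from concurrent.futures import ThreadPoolExecutor

    def search(prompt, expand, score, passes_tests, beam=8, depth=4):
        # Keep a frontier of candidates; expand them in parallel, return the
        # first one that passes, otherwise keep the best `beam` and go deeper.
        frontier = [prompt]
        for _ in range(depth):
            with ThreadPoolExecutor(max_workers=beam) as pool:
                children = [c for kids in pool.map(expand, frontier) for c in kids]
            for candidate in children:
                if passes_tests(candidate):
                    return candidate
            frontier = sorted(children, key=score, reverse=True)[:beam]
        return None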
reply
The Infinite Monkey Theorem just reached its peak
reply
You could also parse prompts into an AST, run inference, run evals, then optimise the prompts with something like a genetic algorithm.
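
Toy sketch of the genetic-algorithm half (skipping the AST parsing; mutate(), crossover(), and eval_score() are hypothetical hooks you'd supply):

    import random

    def optimise(seed_prompts, eval_score, mutate, crossover,
                 population=32, generations=20, elite=8):
        # Evolve a population of prompts: keep the top scorers each generation,
        # then refill the pool by recombining and mutating the survivors.
        pop = list(seed_prompts)
        while len(pop) < population:
            pop.append(mutate(random.choice(seed_prompts)))
        for _ in range(generations):
            survivors = sorted(pop, key=eval_score, reverse=True)[:elite]
            pop = survivors + [
                mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(population - elite)
            ]
        return max(pop, key=eval_score)

The expensive part is eval_score, which is exactly what this kind of inference speed makes cheap.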
reply
And then it's slow again to finally find a correct answer...
reply
It won't find the correct answer. Garbage in, garbage out.
reply
This doesn't work. The model outputs the most probable tokens. Running it again and asking for less probable tokens just gives you the same thing with more errors.
reply
Do you not have experience with agents solving problems? They already successfully do this. They try different things until they get a solution.
reply
Agreed, this is exciting, and has me thinking about completely different orchestrator patterns. You could begin to approach the solution space much more like a traditional optimization strategy such as CMA-ES. Rather than expect the first answer to be correct, you diverge wildly before converging.
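
A crude version of that diverge-then-converge loop (not real CMA-ES; sample() and score() are hypothetical, with temperature standing in for the spread of the search distribution):

    def diverge_converge(prompt, sample, score, rounds=5, width=64):
        # Sample wide and hot first, then halve the temperature each round
        # while reseeding from the best candidate found so far.
        best, best_score, temp = None, float("-inf"), 1.5
        for _ in range(rounds):
            for _ in range(width):
                candidate = sample(prompt, seed=best, temperature=temp)
                s = score(candidate)
                if s > best_score:
                    best, best_score = candidate, s
            temp *= 0.5
        return best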
reply
This is what people already do with “ralph” loops using the top coding models. It’s slow relative to this, but still very fast compared to hand-coding.
reply
OK investors, time to pull out of OpenAI and move all your money to ChatJimmy.
reply
A related argument I raised a few days back on HN:

What's the moat with these giant data centers being built with hundreds of billions of dollars of Nvidia chips?

If such chips can be built so easily, and offer this insane level of performance at 10x efficiency, then one thing is 100% sure: more such startups are coming... and with them, an entirely new ecosystem.

reply
RAM hoarding is, AFAICT, the moat.
reply
lol... true that for now though
reply
Yeah, just because Cisco had a huge market lead in telecom in the late '90s, it doesn't mean they kept it.

(And people nowadays: "Who's Cisco?")

reply
You'd still need those giant data centers for training new frontier models. These Taalas chips, if they work, seem to do the job of inference well, but training will still require general-purpose GPU compute.
reply
Next up: wire up a specialized chip to run the training loop of a specific architecture.
reply
I think their hope is that they’ll have the “brand name” and expertise to have a good head start when real inference hardware comes out. It does seem very strange, though, to have all this massive infrastructure investment in what is ultimately going to be useless prototyping hardware.
reply
Tools like openclaw are starting to make the models a commodity.

I need some smarts to route my question to the correct model. I won't care which one that is. Selling commodities is a business notorious for slow, steady growth.

reply
Nvidia bought all the capacity so their competitors can't be manufactured at scale.
reply
If I am not mistaken, this chip was built specifically for the Llama 8B model. Nvidia chips are general purpose.
reply
You mean Nvidia?
reply
I dunno, it pretty quickly got stuck; the "attach file" didn't seem to work, and when I asked "can you see the attachment" it replied to my first message rather than my question.
reply
It’s Llama 3.1 8B. No vision, not smart. It’s just a technical demo.
reply
Why is everyone seemingly incapable of understanding this? What is going on here? It's like AI doomers consistently have the foresight of a rat. Yeah, no shit it sucks, it's running Llama 3 8B, but they're completely incapable of extrapolation.
reply
Hmm... I had tried simple chat conversations without file attachments.
reply
I got 16,000 tokens per second ahaha
reply
I get nothing, no replies to anything.
reply
Maybe the HN and Reddit crowd has overloaded them lol
reply
That… what…
reply
Well, it got all 10 wrong when I asked for the top 10 catchphrases of a character in Plato's books. It confused the baddie with Socrates.
reply
Fast, but stupid.

   Me: "How many r's in strawberry?"

   Jimmy: There are 2 r's in "strawberry".

   Generated in 0.001s • 17,825 tok/s
The question is not about how fast it is. The real questions are:

   1. How is this worth it over diffusion LLMs? (No mention of diffusion LLMs at all in this thread. This also assumes that diffusion LLMs will get faster.)

   2. Will Taalas also work with reasoning models, especially those beyond 100B parameters, while still producing correct output?

   3. How long will it take for newer models to be turned into silicon? (This industry moves faster than Taalas.)

   4. How does this work when one needs to fine-tune the model but still wants the speed advantages?
reply
The blog answers all those questions. It says they're working on fabbing a reasoning model this summer. It also says how long they think they need to fab new models, and that the chips support LoRAs and tweaking context window size.

I don't get these posts about ChatJimmy's intelligence. It's a heavily quantized Llama 3, using a custom quantization scheme because that was state of the art when they started. They claim they can update quickly (so I wonder why they didn't wait a few more months tbh and fab a newer model). Llama 3 wasn't very smart but so what, a lot of LLM use cases don't need smart, they need fast and cheap.

Apparently they can also run DeepSeek R1, and they have benchmarks for that. New models only require a couple of new masks, so they're flexible.

reply
LLMs can't count. They need tool use to answer these questions accurately.
reply
I asked, “What are the newest restaurants in New York City?”

Jimmy replied with, “2022 and 2023 openings:”

0_0

reply
Well, technically its answer is correct when you consider its knowledge cutoff date... it just gave you a generic, always-right answer :)
reply
ChatJimmy's trained on Llama 3.1
reply
It's super fast but also super inaccurate, I would say not even GPT-3 level.
reply
That's because it's Llama 3 8B.
reply
There are a lot of people here who are completely missing the point. What is it called when you judge an idea at a single point in time without seemingly being able to imagine five seconds into the future?
reply
“static evaluation”
reply
It is incredibly fast, on that I agree, but even the simple queries I tried got very inaccurate answers. Which makes sense; it's essentially a trade-off of how much time you give it to "think". But if it's fast to the point where it has no accuracy, I'm not sure I see the appeal.
reply
The hardwired model is Llama 3.1 8B, a lightweight model from two years ago. Unlike newer models, it doesn't use "reasoning": the time between question and answer is spent predicting the next tokens. It doesn't run faster because it spends less time "thinking"; it runs faster because its weights are hardwired into the chip rather than loaded from memory. A larger model running on a larger hardwired chip would run about as fast and get far more accurate results. That's what this proof of concept shows.
reply
I see, that's very cool, that's the context I was missing, thanks a lot for explaining.
reply
If it's incredibly fast at a 2022 state of the art level of accuracy, then surely it's only a matter of time until it's incredibly fast at a 2026 level of accuracy.
reply
Yeah, this is mind-blowing speed. Imagine this with Opus 4.6 or GPT 5.2. Probably coming soon.
reply
I'd be happy if they can run GLM 5 like that. It's amazing at coding.
reply
Why do you assume this?

I can produce total gibberish even faster; it doesn't mean I produce Einstein-level thought if I slow down.

reply
Better models already exist; this is just proving you can dramatically increase inference speeds / reduce inference costs.

It isn't about model capability - it's about inference hardware. Same smarts, faster.

reply
Not what he said.
reply
I think it might be pretty good for translation, especially if you feed it small chunks of the content at a time so it doesn't lose track on longer texts.
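
e.g. something like this, where translate_chunk() is a hypothetical call to the model and chunking is done naively by paragraph:

    def translate(text, translate_chunk, max_chars=1500):
        # Pack paragraphs into chunks under max_chars, translate each chunk
        # independently, then stitch the results back together.
        chunks, current = [], ""
        for para in text.split("\n\n"):
            if current and len(current) + len(para) > max_chars:
                chunks.append(current)
                current = ""
            current += para + "\n\n"
        if current:
            chunks.append(current)
        return "\n\n".join(translate_chunk(c).strip() for c in chunks)

You do lose cross-paragraph context that way, so it's a trade-off between speed and coherence.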
reply