> We had good small language models for decades. (E.g. BERT)

BERT isn’t a SLM, and the original was released in 2018.

The whole new era kicked off with Attention Is All You Need; we haven’t reached even a single decade of work on it.

reply
> BERT isn’t a SLM

Huh? BERT is literally a language model that's small and uses attention.

And we had good language models before BERT too.

They were a royal bitch to train properly, though. Nowadays you can get the same with just 30 minutes of prompt engineering.

reply
> > BERT isn’t a SLM
>
> Huh? BERT is literally a language model that's small and uses attention.

Astute readers will note what’s been missed here.

Fascinating, really. Your confidently stated yet factually void comments I’d have previously put down to one of the classic programmer mindsets. Nowadays, though - where do I see that kind of thing most often? Curious.

reply
After some research, I think I understand what you're getting at here: BERT encodes text but isn't architecturally suited to generating it, which "LLMs" (the lack of a definition here is resulting in you two talking past each other), maybe more accurately referred to as GPTs, can do.

Also, the irony of your comment, itself confidently stated yet void of any content, was not missed either - consider dropping the superiority complex next time.

reply
You can actually generate surprisingly coherent text with minimal finetuning of BERT, by reinterpreting it as a diffusion model: https://nathan.rs/posts/roberta-diffusion/
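A toy sketch of that diffusion-style generation loop: start fully masked, then unmask a few positions per step so later steps condition on earlier choices. The `toy_fill` predictor and its vocabulary are made-up stand-ins for the model's actual masked-token head (the linked post fine-tunes RoBERTa; this is only an illustration of the schedule):

```python
import random

MASK = "[MASK]"

def toy_fill(tokens):
    """Stand-in for a BERT-style fill-mask prediction (assumption:
    a real version would take the model's top predictions)."""
    vocab = ["the", "cat", "sat", "on", "mat"]
    return [random.choice(vocab) if t == MASK else t for t in tokens]

def diffusion_generate(length, steps=4, seed=0):
    random.seed(seed)
    tokens = [MASK] * length
    for _ in range(steps):
        proposal = toy_fill(tokens)
        # Unmask only a fraction of positions each step, keeping the
        # rest masked so later steps see the committed context.
        k = max(1, length // steps)
        still_masked = [i for i, t in enumerate(tokens) if t == MASK]
        for i in still_masked[:k]:
            tokens[i] = proposal[i]
    # Final pass fills anything left over.
    return toy_fill(tokens)

print(diffusion_generate(5))
```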

I don’t see a useful definition of LLM that doesn’t include BERT, especially given its historical importance. 340M parameters is only “small” in the sense that a baby whale is small.

reply
For context, BERT is encoder-only, whereas SLMs and LLMs are decoder-only, and BERT is very much not about generating text; it's a completely different architecture with a different purpose behind it. I believe some multimodal variants nowadays may muddy the waters slightly, but fundamentally they're very different things, and "decades" only holds if you also include the history of computing in general.
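The encoder-only vs decoder-only distinction boils down to the attention mask. A minimal illustration (toy matrices, not either model's actual implementation):

```python
n = 4  # toy sequence length

# Encoder-only (BERT-style): full self-attention, every position
# sees every other position -- suited to understanding whole inputs.
encoder_mask = [[1] * n for _ in range(n)]

# Decoder-only (GPT-style): causal mask, position i only attends to
# positions <= i, which is what makes left-to-right generation work.
decoder_mask = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

for row in decoder_mask:
    print(row)
```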

While I could’ve written that better and with less attitude - gotta confess, and thx for pointing out my smugness - the AI stuff of the last few weeks really got under my skin. Think I’m feeling rather fatigued about it all.

reply
BERT is one example of a language model that solved specific language tasks very well and that existed before LLMs.

We had very good language models for decades. The problem was that they needed to be trained, which LLMs mostly don't. You can solve a language problem now with just some system-prompt manipulation.

(And honestly, typing in system prompts by hand feels like a task that should definitely be automated. I'm waiting for "soft prompting" to become a thing so we can come full circle and just feed the LLM an example set.)
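For instance, the kind of classifier that once needed a labelled training run can now be approximated by pasting examples into the prompt. A minimal sketch of that few-shot pattern (the `build_prompt` helper and the review data are made-up illustrations, not any particular API):

```python
def build_prompt(examples, query):
    """Assemble a few-shot prompt from (text, label) pairs plus a query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The trailing "Sentiment:" leaves the answer slot for the model.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Loved every minute of it.", "positive"),
    ("A complete waste of time.", "negative"),
]
prompt = build_prompt(examples, "Surprisingly good.")
print(prompt)
```

The resulting string would be sent to an LLM in place of training a dedicated model; swapping the examples retargets the "classifier" with no training at all.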

reply
> Astute readers will note what’s been missed here.

I’m not astute enough to see what was missed here. Could you explain?

reply
> The entire point of LLMs is that you don't have to spend money training them for each specific case.

I don’t agree. I would say the entire point of LLMs is to be able to solve a certain class of non-deterministic problems that cannot be solved with deterministic procedural code. LLMs don’t need to be generally useful in order to be useful for specific business use cases. I as a programmer would be very happy to have a local coding agent like Claude Code that can do nothing but write code in my chosen programming language or framework, instead of using a general model like Opus, if it could be hyper-specialized and optimized for that one task, so that it is small enough to run on my MacBook. I don’t need the other general reasoning capabilities of Opus.

reply
Why would you think a system that can reason well in one domain could not reason well in other domains? Intelligence is a generic, on-the-fly programmable quality. And perhaps your coding is different from mine, but it includes a great deal of general reasoning, going from formal statements to informal understandings and back until I get a formalization that will solve the actual real-world problem as constrained.

reply
> I don’t agree. I would say the entire point of LLMs is to be able to solve a certain class of non-deterministic problems that cannot be solved with deterministic procedural code

You are confusing LLMs with more general machine learning here. We've been solving those non-deterministic problems with machine learning for decades (for example, tasks like image recognition). LLMs are specifically about scaling that up and generalising it to solve any problem.

reply