> like entity recognition

As someone who has done traditional NLP work as at least part of my job for the last 15 years, LLMs do offer a vastly superior NER solution over any previous NLP options.

I agree with your overall statement that people frequently rush to grab an LLM when superior options already exist (classification is a big example, especially when the power of embeddings can be leveraged), but NER is absolutely a case where LLMs are the superior option (unless you have latency/cost requirements that force you to accept inferior quality as the trade-off, but your default should be an LLM today).
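To make the classification point concrete, here is a minimal sketch of embedding-based classification via nearest-neighbor cosine similarity. The labels and vectors are entirely made up for illustration; in practice the vectors would come from a real embedding model (a sentence-transformer or an embeddings API), not hand-written lists.

```python
import math

# Toy "embeddings": hypothetical 3-d vectors standing in for real
# model embeddings, labeled with made-up support-ticket categories.
LABELED = {
    "refund request":   ([0.9, 0.1, 0.0], "billing"),
    "card was charged": ([0.8, 0.2, 0.1], "billing"),
    "app crashes":      ([0.1, 0.9, 0.2], "bug"),
    "login fails":      ([0.2, 0.8, 0.1], "bug"),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(query_vec):
    # Nearest labeled example by cosine similarity -- no LLM call needed,
    # and adding a new class is just adding labeled examples.
    best = max(LABELED.values(), key=lambda lv: cosine(query_vec, lv[0]))
    return best[1]

print(classify([0.85, 0.15, 0.05]))  # lands near the billing examples
```

The same idea scales up to nearest-centroid or k-NN over thousands of labeled examples, at a fraction of the latency and cost of a per-item LLM call.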

reply
I agree! I used 'symbolic AI' for NLP starting in the early 1980s. Everything back then was so brittle, and very labor intensive.
reply
Oh 100%! There are many problems (including this one!) that probably aren't best suited for an LLM. I was just trying to pick a really simple example that most people would follow.
reply
Is there a non-transformer based entity extraction solution that's not brittle? My understanding is that the cutting edge in entity extraction (e.g. spaCy) is just small BERT models, which rock for certain things, but don't have the world knowledge to handle typos / misspellings etc.
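The typo-brittleness point can be illustrated with a toy gazetteer-style matcher (the entity list and names here are made up). Exact-match lookup, which a lot of old-school NER pipelines leaned on, fails on a single typo; even a crude stdlib fuzzy matcher recovers it, though neither brings the world knowledge an LLM does.

```python
import difflib

# Hypothetical gazetteer of known entities -- list-based matching is
# one reason classic NER pipelines were so brittle.
KNOWN_ENTITIES = ["Barack Obama", "Angela Merkel", "Microsoft"]

def exact_match(span):
    # One misspelled character and this returns nothing.
    return span if span in KNOWN_ENTITIES else None

def fuzzy_match(span, cutoff=0.8):
    # difflib.get_close_matches is stdlib; real systems would use
    # something sturdier, but it makes the point.
    hits = difflib.get_close_matches(span, KNOWN_ENTITIES, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(exact_match("Barrack Obama"))  # None -- the extra "r" breaks it
print(fuzzy_match("Barrack Obama"))  # recovers "Barack Obama"
```

An LLM goes further still: it can resolve spans that never appear in any list ("the former German chancellor") because the knowledge is in the model, not in a lookup table.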
reply
but then you run into edge cases with indirect references, and entity recognition models aren't smart enough to deal with them, and the bitter lesson hits you again.
reply
the bitter lesson comes for us all, unfortunately!
reply
I don't think you realize how bad NLP was prior to transformers. Old-school entity recognition was extremely brittle, to the point that it basically didn't work.

CV too, for that matter: object recognition before deep learning required a white background and consistent angles. Remember this XKCD from only 2014? https://xkcd.com/1425/

reply
CV is a space where I would 100% agree with you. But - edge cases notwithstanding - there's not so much of a drop-off with NER that I would first go to an LLM.
reply