"and we didn't see anything" is not justified at all.

Meta absolutely has (or at least had) a world-class industry AI lab and has published a ton of great work and open source models (granted, their open source LLM stuff failed to keep up with Chinese models in 2024/2025; their other open source work on things like segmentation doesn't get enough credit, though). Yann's main role was Chief AI Scientist, not any sort of product role, and as far as I can tell he did a great job building up and leading a research group within Meta.

He deserves a lot of credit for pushing Meta to be very open about publishing research and open-sourcing models trained on large-scale data.

Just as one example, Meta (together with NYU) just published "Beyond Language Modeling: An Exploration of Multimodal Pretraining" (https://arxiv.org/pdf/2603.03276) which has a ton of large-experiment backed insights.

Yann did seem to end up with a bit of an inflated ego, but I still consider him a great research lead. Context: I did a PhD focused on AI, and Meta's group had a similar pedigree as Google AI/Deepmind as far as places to go do an internship or go to after graduation.

reply
I wasn't criticising his scientific contribution at all; that's why I started my comment by praising what he did.

Creating a startup has to be about a product. When you raise 1B, investors are expecting returns, not papers.

reply
> Creating a startup has to be about a product. When you raise 1B, investors are expecting returns, not papers.

Speaking of returns - Apple absolutely fucked Meta ads with the privacy controls, which trashed ad performance, revenue, and share price. Meta turned things around using AI, with Yann as the lead researcher. Are you willing to give him credit for that? Revenue is now greater than it was pre-Apple-data-lockdown.

reply
>> but he had access to many more resources in Meta, and we didn't see anything

> I wasn't criticising his scientific contribution at all, that's why I started my comment by appraising what he did.

You were criticising his output at Facebook, though, and he was in the research group there, not a product group, so it seems like we did actually see lots of things?

reply
They are not expecting returns at 1B+, just for someone to pay more than they paid six months ago.
reply
> There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.

That's true for 99% of the scientists, but dismissing their opinion based on them not having done world shattering / ground breaking research is probably not the way to go.

> I sincerely wish we will see more competition

I really wish we don't, science isn't markets.

> Understanding world through videos

The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.

reply
Do not keep bad results in context. You have to purge them to prevent them from affecting the next output. LLMs are deceptively capable, but they don't respond like a person. You can't count on implicit context. You can't count on parts of the implicit context having more weight than others.
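To make the purging concrete, here is a minimal sketch. It assumes the common `{"role": ..., "content": ...}` chat-message shape; `drop_bad_turns` is a hypothetical helper, and no real chat API is called.

```python
# Sketch: instead of appending corrections after a bad completion,
# drop the bad turn (and the correction that replied to it) and re-ask.

def drop_bad_turns(messages, bad_indices):
    """Return a copy of the conversation without the flagged turns.

    Removing a bad assistant turn keeps the failed output from
    steering the next completion.
    """
    bad = set(bad_indices)
    return [m for i, m in enumerate(messages) if i not in bad]

history = [
    {"role": "user", "content": "Summarize the report in 3 bullets."},
    {"role": "assistant", "content": "(rambling 10-paragraph summary)"},  # bad output
    {"role": "user", "content": "No, just 3 bullets please."},            # correction
]

# Purge the bad completion and the correction, then restate the ask once:
cleaned = drop_bad_turns(history, [1, 2])
cleaned.append({"role": "user",
                "content": "Summarize the report in exactly 3 bullets."})
```

The point is that `cleaned` contains only the original ask plus one refined restatement, so the model never sees its failed attempt again.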
reply
Most folks get paid a lot more in a corporate job than tinkering at home; using the 'follow the money' logic, it would make sense that they would produce their most inspired work as 9-5 full stack engineers.

But passion and freedom to explore are often more important than resources.

reply
In an interview, Yann mentioned that one reason he left Meta was that they were very focused on LLMs and he no longer believed LLMs were the path forward to reaching AGI.
reply
> It could be a management issue, though

Or, maybe it's just hard?

reply
Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuned models evolved from work on top of Llama models.
reply
Llama wasn’t Yann LeCun’s work and he was openly critical of LLMs, so it’s not very relevant in this context.

Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.

reply
He founded FAIR and the team in Paris that ultimately worked on the early Llama versions.
reply
> My only contribution was to push for Llama 2 to be open sourced.

Quite a big contribution in practice.

reply
Sure, but I don't think that's relevant in a startup with 1B of VC money either. Meta can afford to (attempt to) commoditize their complement.
reply
> we didn't see anything.

Is it a troll? Even if we just ignore Llama, Meta produced and released so much foundational research and open source code. I would say the computer vision field would be years behind if Meta hadn't published core research like DETR or MAE.

reply
You should ignore Llama because by his own admission,

>My only contribution was to push for Llama 2 to be open sourced.

reply
He founded the team that worked on fastText, Llama, and other similarly impactful projects.
reply
Did he work on those vision models?
reply
That's such a terrible take.

For a hot minute Meta had a top-3 LLM and open sourced the whole thing, even with LeCun's reservations about the technology.

At the same time Meta spat out huge breakthroughs in:

- 3d model generation

- Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.

- A whole new class of world modeling techniques (JEPAs)

- SAM (Segment anything)

reply
> - Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.

If it was a breakthrough, why did Meta acquire Wang and his company? I'm genuinely curious.

reply
People make stupid acquisitions all of the time.
reply
Wang fits the profile of a possible successor CEO for Meta. Young, hit it big early, hit the AI boom early straight out of college. Obviously not woke (just look at his public statements).

Unfortunately, the dude knows very little about AI or ML research. He's just another wealthy grifter.

At this point decision making at Meta is based on Zuckerberg's vibes, and I suspect the emperor has no clothes.

reply
He was suffocated by the corporate aspect of Meta, I suspect.
reply
I can’t reconcile this dichotomy: most of the landmark deep learning papers were developed with what, by today’s standards, were almost ridiculously small training budgets — from Transformers to dropout, and so on.

So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D

reply
It's a matter of needing more time, which is a resource even SV VCs are scared to throw around. Look at the timeline of all these advancements and how long they took:

- LeCun introduced backprop for deep learning back in 1989

- Hinton published about contrastive divergence in next-token prediction in 2002

- AlexNet was 2012

- Word2vec was 2013

- Seq2seq was 2014

- AiAYN was 2017

- UnicornAI was 2019

- InstructGPT was 2022

This makes a lot of people think that things are just accelerating and they can be along for the ride. But it's the years and years of foundational research that allow this to be done. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. That sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.

reply
If his ideas had real substance, we would have seen substantial results by now. He introduced I-JEPA in 2023, so almost three years ago at this point.

If he still hasn’t produced anything truly meaningful after all these years at Meta, when is that supposed to happen? Yann LeCun has been at Facebook/Meta since December 2013.

Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.

reply
Your take is brutal but spot on
reply