upvote
How do you think those models get trained? You can only get so far with Wikipedia, Reddit, and non-fiction works like books and academic papers.
reply
Have a look at this article: https://www.washingtonpost.com/technology/interactive/2023/a...

The NY Times is 0.06% of Common Crawl.

These news media outlets provide a drop in the ocean's worth of information, both quantitatively and qualitatively.

The news / media industry is really just trying to hold on to their lifeboat before inevitably becoming entirely irrelevant.

(I do find this sad, but it is the reality - I can already get considerably better journalism from LLMs than from actual journalists - both clickbait stuff and high-quality stuff.)

reply
That seems like a reductive way to consider it. What percent of music was created by Led Zeppelin? What percent of art was painted by Monet? What percent of films by Alfred Hitchcock? It may be a small percentage objectively but they are hugely influential.
reply
I don't think backpropagation cares whose text it's backpropagating.
reply
The data sets aren't naively fed into the training runs.

Instead, training attempts to sample more heavily from higher quality sources, with, I'm sure, a mix of manual and heuristic labeling.
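To make the idea concrete, here is a minimal sketch of weighted source sampling. The source names and weights are purely hypothetical (real pipelines use learned quality classifiers and manual curation, not a hard-coded dict):

```python
import random

# Hypothetical per-source quality weights -- illustrative only.
source_weights = {
    "wikipedia": 3.0,     # upsampled: considered high quality
    "books": 2.5,
    "news": 2.0,
    "common_crawl": 0.5,  # downsampled: mostly noise
}

def sample_source(weights, rng=random):
    """Pick a data source with probability proportional to its weight."""
    sources = list(weights)
    return rng.choices(sources, weights=[weights[s] for s in sources], k=1)[0]

# Draw many samples: high-weight sources dominate the training mix
# even if they are a small fraction of the raw data by volume.
counts = {s: 0 for s in source_weights}
for _ in range(100_000):
    counts[sample_source(source_weights)] += 1
```

The point of the sketch: a source's share of the raw crawl (like the 0.06% figure above) need not equal its share of what the model actually trains on.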

reply
FWIW, no LLM I've ever used generates text in the writing style that newspapers and news sites use - so I honestly doubt those sources have been given a meaningful boost in relevance.

Their idioms would occasionally leak through otherwise.

reply
90% of Common Crawl is complete junk, while the tiny fraction of news articles powers almost all of the AI answers in Google Search.
reply
How many Reddit, HN, etc. posts are based on NYT articles? How many derivative news articles, blog posts, YouTube videos, TikToks, etc. are responses to those articles?

At least NYT is probably on the correct side of Sturgeon’s Law: https://en.wikipedia.org/wiki/Sturgeon%27s_law

reply
> How many Reddit, HN, etc. posts are based on NYT articles? How many derivative news articles, blog posts, YouTube videos, TikToks, etc. are responses to those articles?

You may get an inconvenient answer when you ask the question the other way around.

reply
0.06% is way higher than I would expect
reply
How does the entire textual corpus of, say, the New York Times compare to all novels? Each article is a page of text, maybe two at most. There are certainly an awful lot of articles, but it's hard to imagine the total amounts to much more than a couple hundred novels. There must be thousands of novels released each year.
reply
Like apples to oranges.

LLMs are (apparently) massively used to get information about topics in the real world. Novels aren't going to be much help there. Journalism, particularly in written form, provides a fount of facts presented from different angles, as well as opinions, and it was all there free for the taking…

Wikipedia provides the scantest summary of that, fora and social media give you banter, fake news, summaries of news, and a whole lot of shaky opinions, at best. Novels give you the foundations of language, but in terms of knowledge nothing much beyond what the novel is about.

reply
LLMs can get up to date information from primary sources - no journalists required.
reply
I don't understand how LLMs can ask questions at a press conference.
reply
To begin with, your premise is that the only primary sources are press conferences and that press conferences only provide information in response to questions.

But even taking it literally, isn't that one of the things LLMs could actually do? You're essentially asking how a text generator could generate text. The real question is whether the questions would be any good, but the answer isn't necessarily no.

reply
Startup idea right there.
reply
I don't think an LLM can have secret human sources that provide them with confidential information anonymously. Not all news shows up on Twitter.
reply
You don't need the secret human sources any more.

You used to need them, because journalists had the distribution and the sources didn't. In a world of printed newspapers, you couldn't get your story distributed nationally (much less worldwide) without the help of a journalist, doubly so if you wanted to stay anonymous.

Nowadays, you just make a Substack and that's that.

See that recent expose on the Delve fraud as just one example. No journalists were harmed in the making of that article.

reply
The primary source for most news is journalism.
reply
In context, primary source means the subject of the article (the thing the journalist is writing about).

Journalism is by definition a secondary source. (Notwithstanding edge cases like articles reporting directly on the news industry itself.)

reply
Journalism is absolutely not by definition a secondary source.

If a journalist is on location covering a flood, for example, they are the primary source.

A journalist conducting an interview would also be a primary source.

reply
Primary sources can be, and often are, very biased. Journalists are (supposed to be) doing fact-checking and gathering multiple sources from all sides. Modern journalism is in a terrible state, but it's still important.

Imagine if all info about Facebook came from Facebook...

reply
Isn't the non-LLM generated text becoming more valuable for training as the web at large is flooded with slop?

Preventing new human generated text from being used by AI firms (without consent) seems like a valid strategy.

reply
No.

Modern LLMs are trained on a large percentage of synthetic data.

This sentiment is largely outdated (even though it's only a couple of years old).

reply