I thought the page was a hilarious joke, not a bad prediction. A lot of these are fantastic observational humour about HN and tech. Gary Marcus still insisting AI progress is stalling 10 years from now, for example. Several digs at language rewrites. ITER hardly having nudged forwards. Google killing another service. And so on.
reply
Wait, wouldn't sustained net positive energy be huge? (Though I don't think that's actually possible from ITER unless there were some serious upgrades over the next decade!)
reply
It would be huge, but even 20 minutes would still leave fusion far from workable, so it fits neatly into the standard joke that fusion is perpetually 10 years away.
reply
True I suppose, though I also expect we're considerably more than 10 years away from 20 minutes of overall net positive output!
reply
I totally agree that it was a funny joke.

But I've noticed that a lot of people think of LLMs as being _good_ at predicting the future, and that's what I find concerning.

reply
That's a valid concern, and it applies just as well to the number of people who think humans are good at predicting the future.

(I'll make my prediction: 10 years from now, most things will be more similar to what things are today than most people expected them to be)

reply
Does the prompt say anything about being funny, about a joke? If yes, great. If no, terrible.

And the answer is no.

reply
The prompt is funny, in itself. The notion of predicting the future is itself not a serious prompt, because there is no meaningful way of giving a serious response. But the addition of "Writ it into form!" makes it sound even more jokey.

If I gave a prompt like that and got the response I did, I'd be very pleased with the result. If I somehow intended something serious, I'd have a second look at the prompt, go mea culpa, and write a far longer prompt with parameters to make something somewhat like a serious prediction possible.

reply
If you honestly can't see why this prompt was a joke from the get-go, then you may have to concede that LLMs have a better grasp of the subtleties of language than you expect.
reply
That's what makes this so funny: the AI was earnestly attempting to predict the future, but it's so bad at truly out-of-distribution predictions that an AI-generated 2035 HN frontpage is hilariously stuck in the past. "The more things change, the more they stay the same" is a source of great amusement to us, but deliberately capitalizing on this was certainly not the "intent" of the AI.
reply
I don’t think it’s reasonable to assume the AI was earnestly attempting to predict the future; it’s just as likely it was attempting to make jokes for the user who prompted it, or neither of those things.
reply
Apparently it views HN as a humorous website, and made a comical response to the prompt.
reply
I wouldn’t go that far
reply
There is just no reason whatsoever to believe this is someone "earnestly attempting to predict the future", and ending up with this.
reply
There's no chance "google kills gemini cloud" was an earnest prediction. That was 100% a joke.
reply
>It’s interesting to notice how bad AI is at gaming out a 10-year future.

I agree it's a bit silly, but I think it understood the assignment(TM), which was to do a kind of winking, performative song and dance to the satisfaction of the user interacting with it. It's entertainment value rather than sincere prediction. Every single entry is showing off a "look how futury this is" headline.

Actual HN would have plenty of posts orthogonal to any future signalling. Today's front page has Oliver Sacks, retrospectives on Warcraft II, opinion pieces on boutique topics. They aren't all "look at how future-y the future is" posts. I wonder if media literacy is the right word for understanding when an LLM is playing to its audience rather than sincerely imitating or predicting.

reply
Also, many of the posts seemed intended to be humorous and satirical, rather than merely 'futury.' They made me laugh anyway.

> Google kills Gemini Cloud Services

> Running LLaMA-12 7B on a contact lens with WASM

> Is it time to rewrite sudo in Zig?

> Show HN: A text editor that doesn't use AI

reply
I walked away with that page open, glanced at the "Is it time to rewrite sudo in Zig?" post, and clicked to see the comments because I thought it was real :')
reply
A while back I gave it a prompt, something like, "I'm a historian from the far future. Please give me a documentary-style summary of the important political and cultural events of the decade of the 1980s."

It did ok, then I kept asking for "Now, the 1990s?" and kept going into future decades. "Now, the 2050s?" It made some fun extrapolations.

reply
Assuming it was through the chatgpt interface, you can share an anonymized link to the chat if you want to show it off (I'd certainly be curious).
reply
I guess most of the articles it generated are snarky first and prediction second. Like Google cancelling Gemini Cloud, Tailscale for space, the Nia W36 being very similar to a recent launch, etc.
reply
> Tailscale for space

Technically the article was about running it not on a sat, but on a dish (something well within the realm of possibility this year if the router firmware on the darn things could be modified at all)

reply
Yep, the original post seemed more snarky than anything, which was what prompted me to ask Claude my own more “sincere” question about its predictions.

Those predictions were what I think of as a reflection of current reality more than any kind of advanced reasoning about the future.

reply
While I agree completely with the conclusion, for obvious reasons we can't know for sure whether it is correct about the future until we reach it. Perhaps asking it for wild ideas rather than the "most likely" outcome would produce something more surprising.
reply
> It isn’t trained to consider second-order effects.

Well said. There's precious little of that in the human writings that we gave it.

reply
I think the average human would do a far worse job at predicting what the HN homepage will look like in 10 years.
reply