I downloaded the original article page, had Claude extract the submission info to JSON, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
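For anyone curious, a minimal sketch of that loop, assuming a Python script; the call_model helper, the prompt wording, and the file names are hypothetical stand-ins rather than the actual script:

    import json
    import random
    from pathlib import Path

    def call_model(prompt: str) -> str:
        """Hypothetical wrapper around whatever gemini-3-pro client is used;
        returns the model's text response for the given prompt."""
        raise NotImplementedError

    # submissions.json is assumed to hold the Claude-extracted entries,
    # e.g. [{"title": "...", "url": "...", "points": 123}, ...]
    submissions = json.loads(Path("submissions.json").read_text())
    Path("out").mkdir(exist_ok=True)

    for i, sub in enumerate(submissions):
        article = call_model(
            "Write the full article webpage (HTML) for this imagined future "
            f"Hacker News submission: {sub['title']}"
        )
        n_comments = random.randint(3, 40)  # random thread size per submission
        comments = call_model(
            f"Now write {n_comments} Hacker News comments reacting to that article."
        )
        Path(f"out/article_{i}.html").write_text(article)
        Path(f"out/comments_{i}.txt").write_text(comments)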
I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:
"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."
"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."
"Zig v1.0 still hasn't released (ETA 2036)"
The unprompted one-shot leetcode, youtube, and github clones
Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"
"Gemini Cloud Services (formerly Bard Enterprise, formerly Duet AI, formerly Google Brain Cloud, formerly Project Magfi)"
Github Copilot attempts social engineering to pwn the `sudo` repo
It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message
"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."
Another absolute gem:
Columns now support "Vibe" affinity. If the data feels like an integer, it is stored as an integer.
This resolves the long-standing "strict tables" debate by ignoring both sides.
Also: SQLite 4.0 is now the default bootloader for 60% of consumer electronics.
The build artifacts include sqlite3.wasm which can now run bare-metal without an operating system.
edit: added link
This is brilliant. Well done.
It is now the only software in the world still written in C89.
Hilarious.
> Predictive SELECT Statements:
> Added the PRECOGNITION keyword.
> SELECT * FROM sales WHERE date = 'tomorrow' now returns data with 99.4% accuracy by leveraging the built-in 4kB inference engine. The library size has increased by 12 bytes to accommodate this feature.
12 bytes really sounds like something the lead dev would write!
The content is spot on and very funny.
Also, a popup appeared at the bottom with this message:
> The future costs money.
> You have reached your free article limit for this microsecond.
> Subscribe for 0.0004 BTC/month
Suddenly, I have high hopes again for LLMs. Imagine you were a TV/film script writer and had writer's block. You could talk to an LLM for a while to see what funny ideas it can suggest. It is one more tool in the arsenal.
> "We are incredibly proud of what Gemini achieved. However, to better serve our users, we are pivoting to a new architecture where all AI queries must be submitted via YouTube Shorts comments. Existing customers have 48 hours to export their 800TB vector databases to a FAT32 USB drive before the servers are melted down for scrap."
> — Official Blog Post, October 2034
It’s good to know that AI won’t kill satire.
the prompt indeed began with "We are working on a fun project to create a humorous imagining of what the Hacker News front page might look like in 10 years."
The Conditional Formatting rules now include sponsored color scales.
If you want 'Good' to be green, you have to watch a 15-second spot.
Otherwise, 'Good' is 'Mountain Dew Neon Yellow'."
"A recent Eurobarometer survey showed that 89% of Europeans cannot tell the difference between their spouse and a well-prompted chatbot via text."
Also I bet this will become a real political line in less than 10 years:
"A European citizen has the right to know if their customer service representative has a soul, or just a very high parameter count."
prompt_engineer_ret 10 hours ago
I miss the old days of Prompt Engineering. It felt like casting spells. Now you just think what you want via Neural-Lace and the machine does it. Where is the art?
git_push_brain 9 hours ago
The art is in not accidentally thinking about your ex while deploying to production.
> The micro-transaction joke hits too close to home. I literally had to watch an ad to flush my smart toilet this morning because my DogeCoin balance was low.
And the response...
Real question: How do LLMs "know" how to create good humor/satire? Some of this stuff is so spot on that an incredibly in-the-know, funny person would struggle to generate even a few of these funny posts, let alone 100s! Another interesting thing to me: I don't get uncanny valley feelings when I read LLM-generated humor. Hmm... However, I do get it when looking at generated images. (I guess different parts of the brain are activated.)
Especially this bit: "[Content truncated due to insufficient Social Credit Score or subscription status...]"
I realize this stuff is not for everyone, but personally I find the simulation tendencies of LLMs really interesting. It is just about the only truly novel thing about them. My mental model for LLMs is increasingly "improv comedy." They are good at riffing on things and making odd connections. Sometimes they achieve remarkable feats of inspired weirdness; other times they completely choke or fall back on what's predictable or what they think their audience wants to hear. And they are best if not taken entirely seriously.
> © 2035 Springer Nature Limited. A division of The Amazon Basics™ Science Corp.
> Dr. Sarah Connor, DeepMind AlphaFusion v9.22, GPT-8 (Corresponding Author), Prof. H. Simpson & The ITER Janitorial Staff
Top comment:
“The Quantum-Lazy-Linker in GHC 18.4 is actually a terrifying piece of technology if you think about it. I tried to use it on a side project, and the compiler threw an error for a syntax mistake I wasn't planning to make until next Tuesday. It breaks the causality workflow.”
Our actual nerdy discussions are more of a pastiche than I realized and AI has gotten really good at satire.
This is pure gold.
>>> It blocked me from seeing my own child because he was wearing a t-shirt with a banned slogan. The 'Child Safety' filter replaced him with a potted plant.
>> [flagged]
> The irony of flagging this comment is palpable
also worth linking https://worldsim.nousresearch.com/console
https://sw.vtom.net/tmp/worldsim1.png
https://sw.vtom.net/tmp/worldsim2.png
If I had to decide the fate of all AIs, this single output would be a huge mitigating factor in favour of their continuing existence.
I miss those times when AI was a silly thing
'The new "Optimistic Merge" strategy attempts to reconcile these divergent histories by asking ChatGPT-9 to write a poem about the two datasets merging. While the poem was structurally sound, the account balances were not.'
That's genuinely witty.
> My son tried something like this and now he speaks in JSON whenever he gets excited. Is there a factory reset?
>> Hold a strong magnet to his left ear for 10 seconds. Note: he will lose all memories from the last 24 hours.
> "Zig v1.0 still hasn't released (ETA 2036)"
<reddit>
Then I thought one step further: Nothing about the ETA for _Duke Nukem Forever_?
</reddit>
Even AI is throwing shade at Wayland.
> corpo_shill_automator 19 hours ago
> I am a real human. My flesh is standard temperature. I enjoy the intake of nutrient paste.
"Why is anyone still using cloud AI? You can run Llama-15-Quantum-700B on a standard Neural-Link implant now. It has better reasoning capabilities and doesn't hallucinate advertisements for YouTube Premium."
> It is the year 2035. The average "Hello World" application now requires 400MB of JavaScript, compiles to a 12GB WebAssembly binary, and runs on a distributed blockchain-verified neural mesh. To change the color of a button, we must query the Global State Singularity via a thought-interface, wait for the React 45 concurrent mode to reconcile with the multiverse, and pay a micro-transaction of 0.004 DogeCoin to update the Virtual DOM (which now exists in actual Virtual Reality).
This is all too realistic... If anything, 400MB of JS is laughably small for 2035. And the last time I was working on some CI for a front-end project -- a Shopify theme!! -- I found that it needed over 12GB of RAM for the container where the build happened, or it would just crash with an out-of-memory error.
> And the last time I was working on some CI for a front-end project -- a Shopify theme!! -- I found that it needed over 12GB of RAM for the container where the build happened, or it would just crash with an out-of-memory error.
This sounds epic. Did you blog about it? HN would probably love the write-up!
> Bibliographic Note: This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034).
for the article on "Running LLaMA-12 7B on a contact lens with WASM"
Q: I typed "make website" and nothing happened? A: That is correct. You have to write the HTML tags. <div> by <div>.
Q: How do I center a div without the Agent? A: Nobody knows. This knowledge was lost during the Great Training Data Purge of 2029.
Q: Welcome Prof. teekert, How did you come up with the idea to run Doom on mitochondria?
A: Well, there was some post on HN, back in 2025...
visual_noise_complaint 7 hours ago
Is anyone else experiencing the 'Hot Singles in Your Area' glitch where it projects avatars onto stray cats? It's terrifying.
cat_lady_2035 6 hours ago
Yes! My tabby cat is currently labeled as 'Tiffany, 24, looking for fun'. I can't turn it off.
"Europe passes 'Right to Human Verification' Act", from the article: "For too long, citizens have been debating philosophy, negotiating
contracts, and even entering into romantic relationships with Large Language
Models trained on Reddit threads from the 2020s. Today, we say: enough. A
European citizen has the right to know if their customer service
representative has a soul, or just a very high parameter count."
— Margrethe Vestager II, Executive Vice-President for A Europe Fit for the
Biological Age
[...]
Ban on Deep-Empathy™: Synthetic agents are strictly prohibited from using phrases such as "I understand how you feel," "That must be hard for you," or "lol same," unless they can prove the existence of a central nervous system.
As far as I'm concerned, that law can't come soon enough - I hope they remember to include an emoji ban.
For "Visualizing 5D with WebGPU 2.0", the link actually has a working demo [1].
I'm sad to say it, but this is actually witty, funny and creative. If this is the dead-internet bot-slop of the future, I prefer it over much of the discussion on HN today (and certainly over reddit, whose comments are just the same jokes rehashed over and over again, and have been for a decade).
GPU: NVIDIA RTX 9090 Ti (Molten Core) VRAM Usage: 25.3 GB / 128 GB
And the original/derivative doesn’t span full width on mobile. Fixing that too would make it look very authentic.
Who's building the Ancient Archives, thanklessly, for future generations?
Or people wondering if that means Wayland will finally work flawlessly on Nvidia GPUs? What's next, "The Year of Linux on the Desktop"?
Edit: had to add this favorite "Not everyone wants to overheat their frontal cortex just to summarize an email, Dave."
musk_fanboy_88 14 hours ago:
That was a beta feature.
Amazing :D
Improvements: tell it to use real HN accounts, figure out the ages of the participants and take that to whatever level you want, include new accounts based on the usual annual influx, make the comment length match the distribution of a typical HN thread as well as the typical branching factor.
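One way to do the last two: sample a comment-tree skeleton from length and branching-factor distributions before prompting. A rough sketch, with made-up placeholder numbers rather than measured HN statistics:

    import random

    # Placeholder distributions -- in practice, fit these to a real scrape of
    # HN threads instead of hard-coding guesses like these.
    REPLY_COUNT_WEIGHTS = {0: 0.55, 1: 0.25, 2: 0.12, 3: 0.05, 5: 0.03}
    COMMENT_LENGTH_CHARS = [80, 200, 400, 900]  # rough length buckets

    def sample_thread_shape(depth: int = 0, max_depth: int = 5) -> dict:
        """Recursively sample a comment-tree skeleton: each node gets a target
        length and a sampled number of child replies (the branching factor)."""
        n_replies = 0
        if depth < max_depth:
            n_replies = random.choices(
                list(REPLY_COUNT_WEIGHTS),
                weights=list(REPLY_COUNT_WEIGHTS.values()),
            )[0]
        return {
            "target_length": random.choice(COMMENT_LENGTH_CHARS),
            "replies": [sample_thread_shape(depth + 1, max_depth)
                        for _ in range(n_replies)],
        }

    # Each node then becomes a prompt like "write a ~200-character reply to the
    # comment above", so the generated thread follows the sampled structure.
    skeleton = sample_thread_shape()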
> Garbage collection pause during landing burn = bad time.
That one was really funny. Some of the inventions are really interesting. Ferrofluidic seals...
> Zig doesn't have traits. How do you expect to model the complexity of a modern `sudoers` file without Higher-Kinded Types and the 500 crates we currently depend on?
> Also, `unsafe` in Rust is better than "trust me bro" in Zig. If you switch, the borrow checker gods will be angry.
But we already have this on HN ;-)
[dupe]
I'm going to go ask Claude Code to create a functional HyperCard stack version of HN from 1994 now...
Edit: just got a working version of HyperCardHackerNews, will deploy to Vercel and post shortly...
Enjoy!
I also asked Opus 4.5 to make a "1994 style readme page" for the GitHub: https://github.com/benjaminbreen/HyperCardHackerNews
Definitely one of the best HN posts ever. I mean come on!:
FDA approves over-the-counter CRISPR for lactose intolerance (fda.gov)
But it nailed fusion and Gary Marcus lesssgoo
"Hey AI, please create art," and it gives you a hue-shifted Mona Lisa. I find that supremely boring.
Not that long ago on HN there were things being posted regularly about hardware and software that I would define as no less than insane side projects. Projects that people using LLMs today couldn't do in a lifetime. Those posts are still up here and there, but very few compared to the past. They were creative and hard, if not impossible feats.
So when I see content like this post, with comments underneath saying it's "the greatest AI content they've ever seen," it's a sad day. Maybe I'm just an old curmudgeon, hah!
it lampoons so many things... except Rust. nobody dares joke about Rust, that wouldn't be safe. in fact, it's impossible to make a joke in the rust language.
Google killing a service sent me over the top in laughter.
But, it's so on the nose on multiple topics.
I dare say it's more accurate than what the average human would predict.
I would love to see this up against human predictions in some sort of time capsule.
Humans have always failed at predicting qualitative improvements like the internet. Most scifi is just quantitative improvements and knowledge of human nature.
So an LLM has no corpus to train on for predicting truly world-changing events.
Every single "prediction" is something easily recognizable in current HN threads. How can you call that a prediction?
Simple question: if you feed the "AI" the HN front page from 2017, what "predictions" will it make? Besides Google canceling yet another product, of course. Would they all be about crypto?
Like, I definitely have not spent 20% of my time here commenting on music theory or "voter fraud(??)" (that one seems to be based on a single thread I responded to a decade ago). ChromeOS was really the only topic it got right out of 5; if the roasting had revolved around that, it would have been a lot more apt/funny. Maybe it works better with an account that isn't as old as mine?
I find the front-page parody much better done. Gemini 2.5 roasts were a fad on r/homeassistant for a while and they never really appealed to me personally; they felt more like hyper-specificity as a substitute for well-executed comedy. Plus, after the first few examples you pick up on the repetition/go-to joke structures it cycles through, and it quickly starts to get old.
Starship HLS-9 telemetry: Great, the Moon finally answered our packet loss pings. Next up: who left a Docker container running on the Sea of Tranquility?
Linux 7.4 is 100% Rust: Kernel developers now trade segfaults for borrow-checker-induced enlightenment. The new panic message: "You violated ownership. Also please refill the coffee."
Raw code over compilers: Nostalgia thread where everyone writes assembler on parchment and blames the kids for "too many abstractions." OP posts a selfie with a punch card and a tear.
LLaMA-12 on a contact lens: Love the commitment to edge AI. Imagine blinking and getting a 200 OK for your mood. Privacy policy: we store your tears for calibration.
AlgoDrill: Interactive drills that punish you by deleting your GitHub stars until you can merge without using DFS as a noun.
ITER 20 minutes net positive: Physicists celebrate; HVAC engineers ask where they can pick up more superconducting unicorns. Comments: "Can it also power my rage against meetings?"
Restoring a 2024 Framework Laptop: A brave soul resurrected a relic. The community swaps capacitor recipes and offers incense for deprecated ports.
Google kills Gemini Cloud Services: Corporate reorgs reach sentience. The comments are eulogies and migration guides in equal measure.
Visualizing the 5th dimension with WebGPU 2.0: My GPU is sweating. The demo runs at 0.01 fps but it's a transcendent experience.
Nia (autonomous coding agents): Pitch: give context to agents. Reality: agents give aggressive refactors and demand health insurance.
Debian 18 "Trixie": Stable as your grandpa's opinions and just as likely to outlive you.
Rewrite sudo in Zig?: Peak take: security through unfamiliarity. Attackers will be confused for at least 72 hours.
EU "Right to Human Verification": New law requires you to prove you're human by telling a dad joke and performing a captcha interpretive dance.
Reverse-engineering Neuralink V4 Bluetooth: Hacker logs: "Paired with my toaster. It now judges my late-night snacks."
Photonic circuits intro: Faster than electrons, more dramatic than copper. Also, please don't pet the light guide.
OTC CRISPR for lactose intolerance: Biohackers rejoice. Moms immediately order it with a coupon code and a side-eye.
SQLite 4.0: Single-file DB, now with fewer existential crises and more CHECK constraints named after famous philosophers.
Prevent ad-injection in AR glasses: Top comment: "Wear blindfolds." Practical comment: "VPN the whole world."
Jepsen: NATS 4.2: Still losing messages. Maintainers reply: "We prefer the term 'opportunistic delivery.'"
GTA VI on a RISC-V cluster: Performance: charming. Latency: existential. Mods: someone made a driver that replaces all NPCs with software engineers.
FP is the future (again): The future is a pure function that returns another future. Also, monads.
Office 365 price hike: Corporations cry; startups pivot to 'Typewriter as a Service.'
Emulating Windows 10 in-browser: Feels nostalgic until Edge 2.0 asks for admin rights to run a game from 2015.
Tailscale on a Starlink dish: Networking reaches orbit. First bug report: "IP addresses refusing to accept gravity."
Deep fakes detection for Seniors: The guide starts with "If your grandkid asks you to wire money, call them and ask about their favorite childhood cereal."
IBM to acquire OpenAI (rumor): Wall Street plays Risk with press releases. Comments: "Will they rebrand it to BlueAI?"
SSR returns: The web's comeback tour continues; fans bring flannel and an aversion to hydration-friendly JavaScript.
Faraday Cage bedroom manual: DIYers debate tinfoil vs. aluminum yoga wraps. Sleep quality: unknown.
AI progress stall opinion: Hot take carousel. Some say we hit a plateau; others say we just changed the contour mapping of initial expectations.
Text editor that doesn't use AI: Revolutionary. Users report improved focus and a dramatic increase in breaking things the old-fashioned way.
Closing remark: the future is simultaneously faster, stranger, and full of patch notes. Please reboot your expectations and update your planet.
I hope whoever they are is doing well. I like to think they're "recovered" in the alt.sysadmin.recovery sense of the word, and are living happily ever after without a single piece of tech newer than vacuum tubes, handcrafting traditional Inuit canoes or repairing century-old clocks or cultivating artisan sourdough starters or something.
The headline about writing code manually without prompting as well - so on point.