Nobody here is at fault; we're in very trying times, and we need to adjust with patience and consideration.
Use of AI to launch rapid prototypes is like breadboarding a new product. It has a place but it's moving so fast that it's hard to lock down at the moment.
No point everyone throwing excess cortisol in this direction. <3
If it wasn't clear: I think we (as a society) are destroying ourselves by believing in all this generative AI crap, despite the evidence of how often it's wrong: the hallucinations, the awful quality, and so on.
I think we're witnessing the death of intellect: when you discard the evidence in favor of something that only looks right but is nonsense, there's no telling where it will end. If your profession requires you to think and produce output accordingly, but suddenly nobody thinks wrong answers matter, then your profession no longer exists.
Standing up against it and refusing to accept any form of AI anywhere is the only reasonable thing to do. And I don't know if it will make a difference.
It's only slop because anyone can make it now and we're all sick of clones.
The app is good, but the effort required to make it is not impressive at all. I think calling this slop is a misnomer. It's not slop. It's better than what most of us can do, and done in significantly less time. Calling it slop implies you can do better... which you can't.
Not saying the AI slop noise isn’t annoying though.
If you want "feedback" of the same quality and effort as the project itself, you can always go ask your beloved AI for feedback instead of wasting precious human time.
If I’m driving an AI towards finding a solution, would it be any different for a software project?
Never mind the fact that AIs of the LLM variety haven't found, and aren't going to find, solutions to mathematical problems.
This is empirically wrong as of early 2026.
Since Christmas 2025, 15 Erdős problems have been moved from "open" to "solved" on erdosproblems.com, 11 of them crediting AI models. Problems #397, #728, and #729 were solved by GPT-5.2 Pro generating original arguments (not literature lookups), formalized in Lean, and verified by Terence Tao himself. Problem #1026 was solved more or less autonomously by Harmonic's Aristotle model in Lean.
At IMO 2025, three separate systems (Gemini Deep Think, an OpenAI system, and Aristotle) independently achieved gold-medal performance, solving 5 of 6 problems.
DeepSeek-Prover-V2 hits 88.9% on MiniF2F-test. Top models solve 40% of postdoc-level problems on FrontierMath, up from 2%.
Tao's own assessment as of March 2026: AI is "ready for primetime" in math and theoretical physics because it "saves more time than it wastes."
You can disagree about where this is heading, but "haven't and aren't going to" doesn't survive contact with the data.
So, not autonomously.
Also, how does getting into the specifics of which type of AI can solve mathematical problems help the comparison here?
If you think you made "cool stuff" with AI, great, enjoy it, but please keep it to yourself: anyone else can generate the exact same thing if they want it, you are not special, and you are actively drowning out real human effort and passion.
performance is easy. you can craft a test suite that will allow a ralph loop to iterate until it hits the metrics.
the hard part is style/feel/usability. LLMs still suck at that stuff, and crafting tests that capture those qualities is nigh impossible.
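to make the point concrete, here's a minimal sketch of that kind of loop. `ask_model` is a hypothetical stand-in for an LLM call; the canned candidates just make the loop runnable, and the toy `run_tests` stands in for a real suite. the whole structure is an assumption, not anyone's actual tooling.

```python
# Minimal sketch of a "ralph loop": keep regenerating an implementation
# until the test suite passes, or give up after max_iters attempts.

def run_tests(impl):
    """Toy metric: suite passes when impl behaves like addition."""
    try:
        return impl(2, 3) == 5 and impl(-1, 1) == 0
    except Exception:
        return False

# Canned "model outputs" so the loop runs without a real LLM call.
CANDIDATES = [
    lambda a, b: a * b,   # first attempt: wrong operator
    lambda a, b: a - b,   # second attempt: still wrong
    lambda a, b: a + b,   # third attempt: passes
]

def ask_model(attempt):
    # Hypothetical: a real loop would feed the failing output back
    # to the model as context for the next attempt.
    return CANDIDATES[attempt % len(CANDIDATES)]

def ralph_loop(max_iters=10):
    for attempt in range(max_iters):
        impl = ask_model(attempt)
        if run_tests(impl):
            return impl, attempt + 1
    return None, max_iters

impl, iters = ralph_loop()
```

the catch is exactly the one above: `run_tests` only works because "correct" is mechanically checkable. there is no equivalent predicate for "this feels good to use," so the loop has nothing to iterate against.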