Have you seen these projects?
Right now, ShadowBroker is really optimized for 'blinking blip' real-time radar tracking (streaming the raw GeoJSON payload from the FastAPI backend directly to MapLibre every 60s), so we get as close as possible to smooth 60fps entity animations across the map.
Moving to something like Martin would be incredible for handling EVEN MORE entities if we start archiving historical flight and AIS data into a proper PostGIS database, but the trade-off of having to invalidate the vector tile cache every few seconds for live-moving targets makes it a bit overkill right now....
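For anyone curious what that raw payload looks like, here's a minimal sketch of the idea (the function name and entity fields are my assumptions, not the project's actual API):

```python
# Hedged sketch of the payload shape described above: a plain GeoJSON
# FeatureCollection rebuilt from in-memory tracks on every 60s poll.
# build_feature_collection and the entity fields are illustrative only.
import time

def build_feature_collection(entities):
    """Serialize live tracks (id, lon, lat) into the GeoJSON the map consumes."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [e["lon"], e["lat"]]},
                "properties": {"id": e["id"], "ts": e.get("ts", time.time())},
            }
            for e in entities
        ],
    }

payload = build_feature_collection([{"id": "N12345", "lon": -77.0, "lat": 38.9}])
```

On the frontend, MapLibre can just swap a payload like this into a GeoJSON source with setData() each tick, which is why it stays cheap compared to re-generating vector tiles for moving targets.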
Great project, will be contributing!
I set that up for an agricultural project a while back.
Then again they were named after a video game character so it's probably fair.
(spoiler alert if you ever intend to play ME)
everything is open source
https://github.com/blue-monads/potato-apps/tree/master/cimpl...
i should finish it but haven't had time
Nothing wrong with that. Beats a boring corporate dashboard any day. Video game and similar interfaces work for a reason.
No planes etc.
No helpful output in the command window.
Seems fun but doesn't seem to be working.
Did the terminal throw any Python FastAPI errors, or did it just serve the Next.js frontend? I'm going to push an update later today to show a prominent "Backend Disconnected / Missing API Keys" warning on the UI so it doesn't just look dead. Thanks for testing it!
fastapi==0.103.1
uvicorn==0.23.2
yfinance>=0.2.40
feedparser==6.0.10
legacy-cgi==2.6.1
requests==2.31.0
apscheduler==3.10.3
pydantic==2.11.0
pydantic-settings==2.8.0
playwright>=1.58.0
beautifulsoup4>=4.12.0
sgp4>=2.22
cachetools>=5.3.0
cloudscraper>=1.2.71
reverse_geocoder>=1.5.1
lxml>=5.0
python-dotenv>=1.0
and be on Python 3.13, and it should get you up and running
[1] node:internal/modules/cjs/loader:1368
[1] throw err;
[1] ^
[1]
[1] Error: Cannot find module '/home/user/shadow/start-backend.js'
[1] at Function._resolveFilename (node:internal/modules/cjs/loader:1365:15)
[1] at defaultResolveImpl (node:internal/modules/cjs/loader:1021:19)
[1] at resolveForCJSWithHooks (node:internal/modules/cjs/loader:1026:22)
[1] at Function._load (node:internal/modules/cjs/loader:1175:37)
[1] at TracingChannel.traceSync (node:diagnostics_channel:322:14)
[1] at wrapModuleLoad (node:internal/modules/cjs/loader:235:24)
[1] at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:171:5)
[1] at node:internal/main/run_main_module:36:49 {
[1] code: 'MODULE_NOT_FOUND',
[1] requireStack: []
[1] }
I need a realtime OSINT dashboard for OSINT dashboards.
I wish these weekend warriors would work on a project like that someday, to see what such capabilities truly take. You want to know what's happening in the world, you need to place physical sensors out there, deal with the fact that your own signals are being jammed and blocked, the things you're trying to see are also trying to hide and disguise themselves.
The attention to detail is something I've never seen replicated outside. Every time we changed or put out a new algorithm, we had to process old data with it and explain to analysts and scientists every single pixel that changed in the end product and why.
apples and oranges
Archive version...
https://web.archive.org/web/20120112012912/http://henchmansh...
Let me ask a dumb question. Can this be run on a public server (I use dreamhost) with a web interface for others to see? Or is this strictly something that gets run on a local computer?
You can throw it on a server and run it for yourself to see (or anyone else, if you trust people or don't care about losing your free API keys). It's just a standard Next.js and FastAPI stack, and there are Dockerfiles in the repo, so it should be pretty straightforward to spin up on a cheap VPS (like a DigitalOcean droplet or Hetzner).
Honestly, if you just want to show it off to a few people, running it locally and exposing it with a Cloudflare Tunnel or Ngrok is probably the path of least resistance.
I WILL work on a hosted version where users have to bring their own keys in the future, though.
How long before we see this UI in some Iran related news story
first llm to stop using those damn colors for every single transparent modal in existence is going to be a big step forward.
And add chronological feeds of govtrack.us along with all politicians' social media feeds
edit: no idea why they deleted the comment but they linked to this video https://www.youtube.com/watch?v=0p8o7AeHDzg
As was already said in one of the reference videos, it's impressive what one person can do.
But the next step is to define an architecture where authors can define/implement plug-ins with particular modular capabilities instead of one big monolith. For example, instead of just front-end (GUI) and back-end (feeds), there ought to be a middle layer that models some of the domain logic (events: sources, filters, sinks; stories/timelines, etc.).
I would like to see a plug-in for EMM (European Media Monitor) integrated, for instance ( https://emm.newsbrief.eu/NewsBrief/alertedition/en/ECnews.ht... ).
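To make the middle-layer idea concrete, here's a minimal sketch of what such a plug-in contract could look like (all names are my own, purely illustrative):

```python
# Illustrative only: sources emit events, filters gate them, sinks consume
# them. A hypothetical EMM plug-in would just implement poll().
from dataclasses import dataclass, field
from typing import Callable, Iterable, Protocol

@dataclass
class Event:
    source: str          # e.g. "emm", "adsb", "rss"
    payload: dict

class Source(Protocol):
    def poll(self) -> Iterable[Event]: ...

Filter = Callable[[Event], bool]   # decides whether an event passes through
Sink = Callable[[Event], None]     # e.g. timeline view, map layer, archive

@dataclass
class Pipeline:
    sources: list
    filters: list = field(default_factory=list)
    sinks: list = field(default_factory=list)

    def run_once(self) -> int:
        """One polling pass; returns how many events reached the sinks."""
        delivered = 0
        for src in self.sources:
            for ev in src.poll():
                if all(f(ev) for f in self.filters):
                    for sink in self.sinks:
                        sink(ev)
                    delivered += 1
        return delivered
```

The GUI would then only ever talk to sinks, so adding a new feed means registering another Source rather than touching the monolith.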
Everyone has their own heuristic, but if it took someone 6 hours or whatever to make some whole big app, my confidence that they will continue to maintain or care about it even next week is pretty much zero... How could they? They've already made three other apps in that time!
I don't care if the code is perfect, all this stuff just has the feel of plastic cutlery, if that makes sense.
Of course it's commoditized and a dime-a-dozen today, but if this is what HN terms as "AI slop" then apparently human SWEs weren't that much better.
Nobody here is at fault, we're in very trying times - we need to adjust with patience and consideration.
Use of AI to launch rapid prototypes is like breadboarding a new product. It has a place but it's moving so fast that it's hard to lock down at the moment.
No point everyone throwing excess cortisol in this direction. <3
If it wasn't clear, I think we're (as a society) destroying ourselves by believing in all this generative AI crap, even contrary to the evidence of how wrong it often is, the hallucinations, the awful quality etc.
I think we're witnessing the death of intellect: when you discard the evidence in favor of something that only looks right but is nonsense, there's no telling where it will end. If your profession requires you to think and produce output accordingly, but suddenly nobody thinks wrong answers matter, then your profession no longer exists.
Standing up against it and refusing to accept any form of AI anywhere is the only reasonable thing to do. And I don't know if it will make a difference.
It's only slop because anyone can make it now and we're all sick of clones.
The app is good, but the effort required to make it is not impressive at all. I think calling this slop is a misnomer. It's not slop. It's better than what most of us can do, and done significantly faster. Calling it slop implies you can do better... which you can't.
Not saying the AI slop noise isn’t annoying though.
If you want "feedback" of the same quality and effort as the project itself, you can always go ask your beloved AI for feedback instead of wasting precious human time.
If I’m driving an AI towards finding a solution, would it be any different for a software project?
Never mind the fact that AIs of the LLM variety haven't found, and aren't going to find, solutions to mathematical problems.
This is empirically wrong as of early 2026.
Since Christmas 2025, 15 Erdos problems have been moved from "open" to "solved" on erdosproblems.com, 11 of them crediting AI models. Problems #397, #728, and #729 were solved by GPT-5.2 Pro generating original arguments (not literature lookups), formalized in Lean, and verified by Terence Tao himself. Problem #1026 was solved more or less autonomously by Harmonic's Aristotle model in Lean.
At IMO 2025, three separate systems (Gemini Deep Think, an OpenAI system, and Aristotle) independently achieved gold-medal performance, solving 5 of 6 problems.
DeepSeek-Prover-V2 hits 88.9% on MiniF2F-test. Top models solve 40% of postdoc-level problems on FrontierMath, up from 2%.
Tao's own assessment as of March 2026: AI is "ready for primetime" in math and theoretical physics because it "saves more time than it wastes."
You can disagree about where this is heading, but "haven't and aren't going to" doesn't survive contact with the data.
So, not autonomously.
Also, how does getting into the specifics of which type of AI can solve mathematical problems help the comparison here?
If you think you made "cool stuff" with AI, great, enjoy it, but also please keep it to yourself, because anyone else can generate the exact same thing if they want it. You are not special, and you are actively drowning out real human effort and passion.
performance is easy. you can craft a test suite that will allow a ralph loop to iterate until it hits the metrics.
the hard part is style/feel/usability. LLMs still suck at that stuff, and crafting tests that measure those qualities is nigh impossible.
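for reference, the "iterate until it hits the metrics" loop is roughly this (a toy sketch; propose_patch stands in for the LLM call, run_tests for the crafted suite):

```python
# Toy sketch of a test-driven agent loop: keep asking the model for patches
# until the crafted suite passes or we give up. Both callables are stand-ins.
def ralph_loop(candidate, run_tests, propose_patch, max_iters=10):
    for i in range(max_iters):
        failures = run_tests(candidate)
        if not failures:
            return candidate, i          # converged after i revisions
        candidate = propose_patch(candidate, failures)
    return candidate, max_iters          # gave up; metrics unmet
```

the asymmetry is exactly that run_tests is easy to write for performance numbers and nigh impossible for "does this feel good to use."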