I’m sure there’s some over-eager product manager sitting at companies like this, trying to split the market into consumer and enterprise segments just by making APIs unusable by humans and adding 200% useless “security by obscurity”.
The idea is that tmpServer listens on localhost, but dropbear allows port forwarding with admin creds (you’ll need to specify -N). That program has full device access and is the API the Tether app primarily uses to interact with the device.
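As a sketch of that forwarding step (the IP, credentials, and tmpServer port here are placeholders, and this assumes the router's dropbear accepts the usual OpenSSH-style flags):

```shell
# Tunnel local port 1080 to tmpServer on the router's loopback.
# -N means "no remote command", which matters since no shell is exposed.
ssh -N -L 1080:127.0.0.1:1080 admin@192.168.0.1
```

After that, anything you point at localhost:1080 talks to tmpServer as if it were running locally.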
My solution is really just using their pseudo-JWT over their obscured APIs (with reverse-engineered endpoint and parameter names). The limitation is that only one client can be authenticated at a time, so my daemon has priority and I need to stop it to actually access the admin panel.
That's a very narrow read of the word "hacking".
We're literally on a website called "Hacker News". We're not all trying to break things.
Definition 7 would be the relevant one here.
Can’t understand buying them or Netgear today.
My router has a backup/restore feature with an encrypted export, I figured I could use that to control or at least inspect all of its state, but I/codex could not figure out the encryption.
I started with a simple assumption: if I can access the router via a web browser, then I can also automate that. From there the proof of concept was headless Chrome in Docker plus AI-directed code (written via LLM, though not using it all the time) that drives the UI with Selenium. This worked, but it internally hurt me to run a 300 MiB browser just to fetch maybe 200 B of metrics every 10 s or so. So from there we (me + codex) worked together on reverse engineering their minified JS and their funky encryption scheme, and it eventually worked (in the end it's just OpenSSL with some useless padding here and there). Give it a shot, it's a fun day adventure. :)
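If the scheme really does reduce to `openssl enc`, the key/IV derivation is OpenSSL's legacy EVP_BytesToKey (single-round MD5). A minimal stdlib sketch, assuming that legacy mode (the password and salt below are made up):

```python
import hashlib


def evp_bytes_to_key(password: bytes, salt: bytes, key_len: int, iv_len: int):
    """OpenSSL's legacy EVP_BytesToKey: iterate MD5 over (prev || password || salt)
    until enough bytes exist, then split into key and IV."""
    d = b""
    prev = b""
    while len(d) < key_len + iv_len:
        prev = hashlib.md5(prev + password + salt).digest()
        d += prev
    return d[:key_len], d[key_len:key_len + iv_len]


# Example: derive an AES-256-CBC key/IV pair from a made-up password and salt.
key, iv = evp_bytes_to_key(b"password", b"12345678", 32, 16)
```

With the key and IV in hand, the ciphertext itself is plain AES; the "useless paddings" are then just bytes to strip before or after that.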
Edit: here's the end result (kinda; I have a whole infra around it, and another story about a WiFi extender from the same vendor with a different, semi-broken encryption scheme) - https://imgur.com/a/VGbNmBp
You're correct, it gives you access to everything the Tether app can do.
I can't remember the details of the scheme, but it also allows you to authenticate using your TP-Link cloud credentials. If my memory is correct, the username is md5(tplink_account_email) and the password is the cloud account password. If you care, I can find my notes on that to confirm.
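If that memory is right, the derived username is trivial to compute. A sketch with stdlib only (whether the email gets normalized, e.g. lowercased, before hashing is my guess, not something from the notes):

```python
import hashlib


def cloud_username(email: str) -> str:
    # Assumption from the comment above: username = md5(cloud account email).
    # The strip/lowercase normalization here is a guess.
    return hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
```

The result is a 32-character hex string you'd send as the login name, paired with the cloud account password.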
I have a fairly specialized bit of hardware here on my desk. It's a rackmount pro-audio DSP that runs embedded Linux. I want to poke at it (specifically, I want to know why it takes 5 or 6 minutes to boot up, since that is a problem for me).
The firmware is published and available, and it's just a tarball, but the juicy bits inside are encrypted. It has network connectivity for various things, including its own text-based control protocol over SSH. No shell access is exposed (or at least, not documented as being exposed).
So I pointed codex at that whole mess.
It seems to have deduced that the encryption was done with openssl, and is symmetric. It also seems to have deduced that it is running a version of sshd that is vulnerable to CVE-2024-6387, which allows remote code execution.
It has drawn up a plan to prove whether the vulnerability works. That's the next step.
If the vulnerability works, then it should be a hop, skip, and a jump to get in there, enable a path to a shell (it's almost certainly got busybox on there already), and find the key so that the firmware can be decrypted and analyzed offline.
---
If I weren't such a pussy, I'd have started that next step. But I really like this box, and right now it's a black box that I can't recover (I don't have a cleartext firmware image) if things go very wrong. It's not a particularly expensive machine on the used market, but things are tight right now.
And I'm not all that keen on learning how to extract flash memory in-situ in this instance, either.
So it waits. :)
It's also scary where this is going. LLMs are getting fantastic at breaking into things. I sometimes have to dance around the topic with them because they start to get suspicious I'm trying to hack something that doesn't belong to me, which is not the case.
I had some ebooks I bought last year, and I managed to pull down the encrypted PDFs from the web site where you could read them. Claude looked at a PDF and all the data I could find (user ID etc.) and came up with "147 different ideas for a decryption algorithm", which it went through in turn until it found a combination of parts of the user ID and parts of other data, concatenated together, that produced the key. Something I would never have figured out. Then recently the company changed the algorithm for their newer books, so Claude took another look and determined they were modifying the binary data of the PDFs to make them non-standard, so it patched them back first.
And yeah, the bots do get spooked about some things. ChatGPT refused to help with my goal with this DSP; it quickly built a wall around the idea that I could move around some but couldn't bypass.
With codex, I took a different approach that began with having it explore an unnamed local (RFC 1918) IP address with nmap -- without any stated intent. It found the vulnerable sshd version on its own pretty quickly, and accepted that the only way to test it with this black box device is to actually test it.
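The sshd version check itself doesn't strictly need nmap: an SSH server volunteers its identification string as soon as you connect, which is what `nmap -sV` reads too. A minimal stdlib banner grab (host and port are placeholders):

```python
import socket


def ssh_banner(host: str, port: int = 22, timeout: float = 3.0) -> str:
    """Read the SSH identification string (e.g. 'SSH-2.0-OpenSSH_9.6')
    that the server sends first. Comparing that version against the
    CVE-2024-6387 advisory is the same check nmap's service probe does."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(255).decode("ascii", "replace").strip()
```

Whether that version is actually exploitable still needs a real test against the device, which is the part the parent comment is (reasonably) nervous about.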
I suppose I could have discovered that myself with nmap, netcat, and Google, but this was a lot easier. The ease scares me a bit, but this time it's helping me so I guess that's fine...right? (Right?)
Before codex, years ago now, I used ChatGPT to assist with opening an encrypted zip file that contained the as-built documentation for the new, ~million dollar pile of hardware we had in the next room. I have no idea what corpo nonsense required that documentation to be encrypted, or why the manufacturer insisted on only giving me the key in the form of a stupid riddle.
My tolerance for games like that is very limited. Rather than call them up and tell them exactly what I thought about that game, the bot got it sorted with some cut-and-paste operations and automated grinding without much effort on my part. It didn't take long at all and I didn't end up calling anyone an asshole, so that worked well for me. :)
You use decompilation tools and hope they left debug symbols in; those turn the binary into somewhat human-readable language, which is often enough. Even when they didn't, binaries use libraries that are known, or at some point hit documented interfaces, so things can be reasoned about.
How I wish I could just strip this thing down into a monitor with a set of speakers... Screen itself is perfect condition of course but the OS turned it into ewaste.
I've opted just to not plug it in to the network and not provide a WiFi password.
IIRC the last public exploit for all LG TVs for webOS > 5 was in the beginning of 2025 (so pretty recent), but as most sellers on the second hand market have auto-updates turned on, there's no way to know which TVs are vulnerable.
It should be doable to strip down much of webOS with root access. It's nice that webOS in general is very well documented and much is implemented around the Luna service bus. LG offers a developer mode for non-rooted TVs, and there's an active homebrew community because of it. It's a pity that you can't modify the boot partitions, as the firmware verifies their integrity. It would be nice to have an exploit for that.
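For a flavor of that Luna service bus: on webOS OSE at least (LG's TV firmware may differ), services are invoked with `luna-send`. The specific service URI below is from the OSE documentation and may not match a TV build:

```shell
# Ask the system service for the current time; -n 1 means expect one
# response, -f pretty-prints the returned JSON payload.
luna-send -n 1 -f luna://com.webos.service.systemservice/clock/getTime '{}'
```

With root (or developer-mode) access, most of the TV's own features end up being JSON calls over this bus, which is what makes the homebrew tooling tractable.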
Meanwhile my ancient 1080p panel still works, and I noticed I can't actually see the pixels from my couch so, ehh, I guess...
Now why does that sound familiar...?
https://slate.com/technology/2019/02/openai-gpt2-text-genera...
Today we call that "advanced autocomplete", but at the time OpenAI managed to generate a lot of hype about how this would lead to an unstoppable flood of disinformation if they allowed the wrong people access to such a dangerous tool. Even the original GPT-3 was still behind a waitlist with manual approval.
Does anyone know what the author meant by this? Are they talking about a web browser run on the TV?
Give an experienced human this tool and they can achieve exploitation with only a few steering inputs.
Cool stuff
I think by the point you're swearing at it or something, it's a good sign to switch to a session with fresh context.
If I see it misunderstood, I just Esc to stop it, /clear, and try again (or /rewind if I'm deeper into Planning).
I am very here for a world where we can take back control, at scale, of the enshittified, you'll-own-nothing, ad-ridden consumer electronics our capitalist overlords have decided we deserve, by investing some amount of collective token-$, instead of having to pray one smart adhd nerd buys the same TV and decides to take a look.
OTOH, as with anything LLMs take over, I'm concerned we'll soon have very few smart adhd nerds left to work on liberating the next generation of hardened devices.
Lol, a true classic in the embedded world. Some hardware company (it appears these guys make display panel controllers?) ships a piece of hardware, half-asses a barely working driver for it, another company integrates this with a bunch of other crap from other vendors into a BSP, another company uses the hardware and the BSP to create a product and ships it. And often enough the final company doesn't even have an idea about what's going on in the innards of the BSP - as long as it's running their layer of slop UI and it doesn't crash half the time, it's fine, and if it does, it's off to the BSP provider to fix the issues.
But at no stage anywhere is there a security audit, code quality check, or even hardware quality check involved. Part of why BSPs (and embedded product firmware in general) are full of half-assed code is that often enough the drivers have to work around hardware bugs/quirks that are too late to fix in HW, because tens to hundreds of thousands of units have already been produced. The software people are heavily pressured to "make it work or else we gotta write off X million dollars" and "make it work fast, because the longer you take, the more money we lose on interest until we can ship the hardware and get paid for it". And if they are particularly unlucky: "it MUST work by deadline X because we need to get the products shipped to hit the Christmas/Black Friday sales window, or because we need to beat <competitor> in time-to-market; it's mandatory overtime until it works".
And that is how you get exploits so braindead easy that AI models can do the job. What a disgusting world, run to the ground by beancounters.
Most of the BSP is GPL'd software, for which the final product manufacturer should provide the sources to the general public, but all too often that obligation gets sharted upon. In way too many cases you have to be happy if there are at least credits in the user manual or some OSD menu.
Finding the initial foothold is the hardest part. Codex didn't have anything to do with it.
Leave your engagement baiting behavior on Reddit, thank you.
It is from Claude Code, here's the full screenshot: https://i.imgur.com/jYawPDY.png
I also think taking credit for writing an exploit that you didn't write and may not even have the knowledge to do yourself is a bit gray.
Could a script kiddie steer an LLM? How much does this reduce the cost of attacks? Can this scale?
What does this mean for the future of cyber security?
This is really closer to a drill, in that it automated the grunt work under full guidance.
AI without a prompt is a hammer sitting in a drawer.
But I'm happy about any feedback or critique, I might just be wrong honestly.
Philosophically, you could try to differentiate between the human side of the effort and the computer side. You could also differentiate between a really dumb model and a really smart model: a dumb model just spinning its wheels and hoping it gets lucky, versus a smart model actually trying intelligent things and collecting relevant details.
In these cases I think we're assuming a sufficiently smart model making well reasoned headway on a problem. Not sure I would fall on the side of the camp that would label this as brute force by default in all cases. That said, there may be specific scenarios where it might seem fitting even when using a smart model.