I don't go into the shop and wander about until I find something that looks like it, then stand there pointing things going "THAT!" until someone figures out what I mean.
And now I have a T50 Torx bit that I can stick on a ratchet with a long extension and get the passenger seat out of the Range Rover so I can retrieve my daughter's favourite necklace from where it's gotten entangled with the wiring to the gearbox and suspension ECUs in a place where I can see it with a dentist's mirror but can't actually get a grabber onto to fish it out, worse luck.
So that's my afternoon sorted then. Because we're not just hacking on computers round here.
The enterprise tools I am currently working with often have outdated screenshots in their own documentation.
Sure, a GUI is more accessible to the average user, but the tasks in the article aren't going to be done by the average user. And for more technical users, having to navigate System Settings to find anything is like Dr. Sattler plunging her arms into a pile of dinosaur dung.
Because its whole point is that it's a graphical OS.
If you only used the CLI Unix userland, you might as well use Linux.
But people using OSX often know the command line quite well too, at least better than most Windows users. I saw this again and again at university.
BLASPHEMY
Why is there this massive disparity in experience? Is it the automatic routing that ChatGPT auto is doing? Does it just so happen that I've hit all the "common" issues (one was flashing an ESP32 to play around with WiFi motion detection - https://github.com/francescopace/espectre) but even then, I just don't get this "ChatGPT is shit" output that even the author is seeing.
And they don’t provide the prompt, so you can’t really verify if a proper model has the same issues.
As noted, terminal commands can be ridiculously powerful, and can result in messy states.
The last time I asked an LLM for help was when I wanted to move an automounted disk image from the internal disk to an external one. If you do that, when the mount occurs is important.
It gave me a bunch of really crazy (and ineffective) instructions: creating login items with timed bash commands, etc. To be fair, I did try to give it the benefit of the doubt, but each time its advice pooched, it would give even worse workarounds.
One of the insidious things was that it never instructed me to revert the previous attempt, as most online instruction posts do. This resulted in one attempt colliding with the previous ineffective one, when I neglected to revert on my own judgment.
Eventually, I decided the fox wasn’t worth the chase, and just left the image on the startup disk. It wasn’t that big, anyway. I made sure to remove all the litter from the LLM debacle.
Taught me a lesson.
> "A man who carries a cat by the tail learns something he can learn in no other way."
-Mark Twain
Since it's all statistics under the LLM hood, both of those cause proven CLI tools to have strong signals as being the right answer.
I wonder why?
Maybe because that's where the basic tools live.
UIs have better visual feedback for "Am I about to do the right thing?".
But with the AI, there's a good chance it has it correct, and a good chance it'll just be copy/pasted or even run directly. So the risk is reduced.
At least, not if I'm unable to follow along step by step and point at their screen and the relative position of buttons. Even more so if the person I'm talking to is too clueless to provide and interpret context.
This further solidifies my view that LLMs will not achieve AGI, by refuting the oft-repeated pop-sci argument that human brains predict the next word in a sentence just like LLMs do.
Also, languages made up of tokens are still languages, in fact most academics would argue all languages are made up of tokens.
Anyway, it's not LLMs that achieve AGI; it's systems built around LLMs that achieved AGI quite some time ago.
Are you trying to tell me that a Large LANGUAGE Model is better at text than at pictures? What are you going to tell me next? That the sidewalk is hot on a sunny day?
It won’t be as fast to go through them as just pasting some commands, but if that’s what the user prefers…
It would be nice if this were mentioned transparently at the beginning of the article.
I mean - new models also tell you to use the terminal, but the quality is incomparable to what the author is using.
edit: ChatGPT recently talked me through a Linux Mint installation on two old laptops I have at home, where Mint didn't detect the existing Windows installation (which I wanted to keep). I don't think anyone on Reddit or elsewhere would have been as fast/patient as ChatGPT. It was mostly done via terminal commands; one computer was easy, the other already had four partitions and FAT32, so it took longer.
Be that as it may, I use the terminal all the time. It is the primary user interface for me to get computers to do what I want; in the most basic sense I simply invoke various commands from the command line, often delegating to self-written Ruby scripts. For instance, "delem" is my command-line alias for delete_empty_files (kept in delete_empty_files.rb). I have tons of similar "actions"; old-school UNIX people may use some command-line flags for this. I also support command-line flags, of course, but my brain works best when I keep everything super simple at all times. So I actually don't disagree with the AI here; the terminal is efficient. I just don't need AI to tell me that, since I knew it already. So AI may still be stupid. It's like a young, over-eager kid, but without a real ability to "learn".
Am I the only one who thinks like this?
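For a concrete idea of what such a one-word alias might sit in front of, here is a minimal sketch of a delete_empty_files.rb-style helper. The name matches the comment above, but the exact behaviour (non-recursive, zero-byte files only) is an assumption, not the commenter's actual script:

```ruby
require "tmpdir"

# Delete zero-byte regular files directly under dir; return what was removed.
def delete_empty_files(dir)
  Dir.children(dir)
     .map { |name| File.join(dir, name) }
     .select { |path| File.file?(path) && File.zero?(path) }
     .each { |path| File.delete(path) }
end

# Demo in a throwaway directory so running this touches nothing real.
Dir.mktmpdir do |dir|
  File.write(File.join(dir, "empty.log"), "")
  File.write(File.join(dir, "notes.txt"), "keep me")
  removed = delete_empty_files(dir)
  puts removed.map { |p| File.basename(p) }.inspect  # ["empty.log"]
end
```

An alias like `alias delem='ruby ~/bin/delete_empty_files.rb'` would then make it a single command.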
By "few" you mean "few Gen-Zs"?
For UI you need to figure out different locales, OS versions, etc.
But at least TFA wrote up the criticism in text, even transcribing some of the screenshots.
Automating terminal commands is easy, because that's how the OS works anyway: all programs invoke each other by handing (arrays of) strings to the OS and telling it to exec them.
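In Ruby, for instance, passing the command as separate strings hands an argv array straight to exec, with no shell parsing in between (the `echo` invocation here is just illustrative):

```ruby
require "open3"

# With multiple string arguments, Ruby skips the shell entirely and
# execs the program with this argv array, so spaces and shell
# metacharacters in arguments are passed through literally.
out, status = Open3.capture2("echo", "hello from exec")
puts out              # "hello from exec"
puts status.success?  # true
```

Scripting a GUI to the same end means driving an accessibility API or screen automation; there is no equivalently simple "array of strings" interface.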