I now skim any AI-adjacent article I see, reading just the headings, and if I see this pattern I know what I'm getting into:
The Dexterity Deadlock
The Problem
The Geometric Curse
The Sim-to-Real Gap
The Structural Gap f(⋅)
Seeing It in Motion
The N^2 Impedance Mismatch
The Chaos Term ϵ_chaos
The Information Wall
The Weakest Link
Why Manipulation Needs Better
What We Built
From 288 to 15
Does It Work?
Hardware Validation
Robot Hand Landscape
The Take-Home
The fundamental behavior of an LLM is to statistically match its output to its training corpus. The tics these models have are really common in natural human usage too.
In this day and age, I wish people would ask any model OTHER than ChatGPT to rewrite their shit. At least we'd get a different flavor of slop.