Why are we expecting AGI to one-shot it? Can't we have an AGI that occasionally fails to solve some math problem? Is the expectation that AGI must be all-knowing?
By the way, I agree that AGI is not around the corner, and I'm not arguing that any of the LLMs are "thinking machines". My point is just that the goalposts need to be set well.