With that said, just because we don't have a great way of measuring intelligence doesn't mean we should assume LLMs are intelligent. An LLM is code plus a massive collection of trained weights. It has no means of observing and reasoning about the world, and it doesn't store memories the way organic brains do (it is in fact quite limited in this respect). It currently can't solve a problem it hasn't encountered in its training data, or produce novel research on a topic without significant handholding. Furthermore, the frequent errors it makes suggest that it fundamentally does not understand the words it spits out.
I'm not really sure what you mean by the anesthesiology comment. Being able to intubate a patient and inject propofol does not make you more of an expert on consciousness than neuroscientists and neurologists are.
But then they came up with the whole "reasoning model" paradigm, and that contains obvious feedback loops. So now I just throw my hands in the air, because I think no one really knows or can tell for sure. We are all clueless here.
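To make the "obvious feedback loops" point concrete: in a reasoning-model setup, the model's own output is appended to its context and fed back in as input on the next step. A minimal sketch (the `generate` function here is a dummy stand-in for a real model call, not any actual API):

```python
def generate(context: str) -> str:
    # Placeholder for a real LLM call; just emits a numbered "thought".
    return "step:" + str(context.count("step:") + 1)

def reason(prompt: str, steps: int = 3) -> str:
    """Feed the model's own output back into its input, `steps` times."""
    context = prompt
    for _ in range(steps):
        thought = generate(context)    # model reads its own prior output...
        context += "\n" + thought      # ...and that output re-enters the input
    return context
```

The loop is the whole point: each iteration's output becomes part of the next iteration's input, which is the kind of self-referential structure the comment is gesturing at.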
I can really recommend this book by Douglas Hofstadter: https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop
The only thing you can really tell is "I perceive myself in some sort of feedback-loop manner". Which, to me, even sounds like something that has "arisen" from underlying mechanisms.