"... to accomplish what?", is a damn reasonable follow-up, and ends (telos) is something the same Greeks discussed quite extensively.
Modern treatments have tried to skip over this discussion and derive moral arguments without stating explicit ends. The problem is that they still smuggle varying choices of ultimate ends into these arguments without clearly spelling them out, opting to hand-wave about preferences instead.
As such, this question is often glossed over in modern ethical discussion, yet disagreement about moral ends is the crux of what leads to differing conclusions about what is ethical.
Is it to maximize your own happiness as Aristotle would argue, or the prosperity of the state, or the salvation of the soul, or to maximize honor, or to minimize suffering, or to minimize injustice, or to elevate the soul, or to maximize shareholder value, or to make the world as beautiful as possible, or something else?
If you fundamentally disagree about what our goal should be, you're very unlikely to agree on the means to accomplish the goal.
I think what you mean is you've never found a rule you personally prefer more, based purely on vibes. Which is all moral knowledge can ever be.
It's easy to argue against the golden rule anyway, from many angles, depending on your first principles.
The simplest is: How I would like to be treated is not necessarily how they would like to be treated.
In this "original position", their position behind the "veil of ignorance" prevents everyone from knowing their ethnicity, social status, gender, and (crucially in Rawls's formulation) their or anyone else's ideas of how to lead a good life.
Both have problems.
The rules we go by are based on our strengths and weaknesses. They can at most apply to ourselves, and to other forms of life that share certain things with us: feeling pain, needing to sleep, to eat, to breathe air, needing help. These shared vulnerabilities generate what we feel as "fear", rooted in our biology. You cannot impose these kinds of values on AI, or AGI, as it will possess a wildly different set of strengths and weaknesses from us humans.
Even in human relations it’s dangerous. I for one don’t want to be treated the same way someone into BDSM wants to be treated. I don’t want to avoid cooking or turning the lights on (or off!) on a Friday night but others are quite happy with that.
If you assign that morality to a species that isn't the same as you, that's a problem. My guinea pig wants nothing more from life than hay, nuggets, some room to run around, and shelter from scary shapes. If guinea pigs were in charge of the world, life would be very different.
“Live and let live” might be a similar theme and less problematic, but then how do you define “living”? You can keep someone alive for decades while torturing them.
How about allowing freedom? Well, that means I’m free to build a nuclear bomb. And set it off wherever I want. We see today, especially, that that type of freedom isn’t well liked.
Due to the complexity of our reality, a lot of things fall on a spectrum, but in most cases things are pretty clear.
In order of priority, so far as you can while maintaining the health and safety of yourself and your loved ones:
- Treat others as THEY wish to be treated
- Treat others as YOU would wish to be treated in their situation
- Treat others with as much kindness and compassion as you can safely afford
When we are safe, we can do BETTER than the Golden Rule. We also have to admit that safety is a requirement that changes expectations.
I have to give credit to Dennis E Taylor's "Heaven's River" for this root idea.