On API pricing you still pay 10% of the input token price on cache reads; I'm not sure whether the subscription limits count this, though.
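As a rough illustration of how the 10% cache-read rate plays out, here's a quick sketch. The per-token price is a made-up placeholder, not a quoted rate:

```python
# Illustrative only: the input price below is an assumed placeholder,
# not an actual published API rate.
INPUT_PRICE_PER_TOKEN = 3.00 / 1_000_000          # assumed $3 per million input tokens
CACHE_READ_PRICE_PER_TOKEN = INPUT_PRICE_PER_TOKEN * 0.10  # cache reads billed at 10%

def request_cost(cached_tokens: int, fresh_tokens: int) -> float:
    """Dollar cost of one request under the assumed rates."""
    return (cached_tokens * CACHE_READ_PRICE_PER_TOKEN
            + fresh_tokens * INPUT_PRICE_PER_TOKEN)

# 100k tokens served from cache plus 2k fresh input tokens:
print(round(request_cost(100_000, 2_000), 4))  # → 0.036
```

So a long cached prefix is cheap but not free: it still accrues a tenth of the normal input cost on every request.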
And of course every conversation now has to compact 80 tokens earlier and gets marginally worse, since results degrade as more content accumulates in the context.