Every major RSS reader supports folders. Your problem is that you engage with RSS as if it were a social media feed, with its single monolithic reverse-chronological timeline.
Just don't do that. Stick all the high-volume news feeds in a folder; you can skim the headlines and hit "mark all as read" once you're done, or whenever you don't want to look at the news anymore.
Stick the low-volume things you care about in their own folder, and those will stay unread, in their own ordering, for you to read at your leisure.
Even for sites that don't offer granular feeds, every major feed reader offers filtering options, and many of them offer fairly complex regex filtering.
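The kind of keyword/regex muting most readers expose can be sketched in a few lines. This is an illustration, not any particular client's implementation; the feed contents and mute patterns below are made up:

```python
# Hypothetical sketch of reader-side regex filtering: parse a minimal
# RSS document with the standard library and drop items whose titles
# match any user-configured mute pattern.
import re
import xml.etree.ElementTree as ET

RSS = """<rss version="2.0"><channel>
  <title>Example News</title>
  <item><title>Quarterly earnings roundup</title></item>
  <item><title>New static site generator released</title></item>
  <item><title>Sports: local team wins again</title></item>
</channel></rss>"""

# The user-configurable part: hide anything matching these patterns.
MUTE_PATTERNS = [re.compile(p, re.IGNORECASE)
                 for p in (r"earnings", r"^sports:")]

def filtered_titles(rss_text):
    root = ET.fromstring(rss_text)
    titles = [item.findtext("title") for item in root.iter("item")]
    return [t for t in titles
            if not any(p.search(t) for p in MUTE_PATTERNS)]

print(filtered_titles(RSS))  # only the static site generator story survives
```

Real clients typically run the same match against titles, authors, or full entry bodies, and let you choose whether a hit hides the item or stars it.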
> This is what led to algorithm-based filtering.
Feed aggregators (and most social media) exist because of discoverability: finding new stuff from new people you hadn't heard of before.
> With agent based approaches, you control the algorithm. That wasn't possible in the past. LLMs can summarize, aggregate, categorize, group, filter, etc.
You'd be spending tens of dollars of compute on something that every major RSS client was doing back in 2006 with the equivalent of less than a single penny worth of current day compute.
I think the open web needs to come back, but in a fair way for everyone, giving readers control over their feeds while also sending traffic and comments back to the original sources. Not quite sure how to do that yet.
https://aws.amazon.com/blogs/machine-learning/use-language-e...