For me, it started when I spent a year and a half reading and digesting books for and against young-earth creationism, then eventually for Christianity itself (its historical truth claims). It struck me that the books were just a serialization of some knowledge structure that existed in the authors’ heads, and by reading I was trying to recreate that structure in my own head. And that’s a super inefficient way to go about this business. So there must be a shortcut, some more powerful intermediate representation than just text (text is too general and powerful, and you can’t compute over it… until now with LLMs?).
That graph felt a lot like code to me: there’s no unique representation of knowledge in a graph, but there are some that are much more useful than others; building a well-factored graph takes time and taste; graphs are composable and reusable in a way that feels like it could help you discover layers of abstraction in your arguments.
I do think there's quite a lot that could be done with LLM assistance here, like finding "duplicate" candidates: statements with the same semantic meaning that could be merged. It's really complicated to think through the side effects, though, so I'm going slow. :)
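Roughly the kind of thing I have in mind, as a sketch (the embedding model and threshold here are placeholders, not anything concludia actually runs):

    # Sketch of "duplicate candidate" detection: embed each statement, then
    # flag pairs whose cosine similarity clears a threshold for human review.
    # Model choice and threshold are illustrative, not what concludia uses.
    from itertools import combinations
    from sentence_transformers import SentenceTransformer, util

    statements = [
        "Carbon offsets reduce net emissions.",
        "Buying offsets lowers overall emissions.",
        "Offsets are often unverifiable.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(statements, convert_to_tensor=True)

    THRESHOLD = 0.85  # arbitrary cutoff; would need tuning against real merge decisions
    for i, j in combinations(range(len(statements)), 2):
        score = util.cos_sim(embeddings[i], embeddings[j]).item()
        if score >= THRESHOLD:
            print(f"merge candidate ({score:.2f}): {statements[i]!r} ~ {statements[j]!r}")

Anything a script like that flags would still land in a human review queue rather than being merged automatically.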
Nurture this; it will become a great tool in the belt for a lot of people.
I'm considering using a self-hosted Neo4j instance for a project, but having only played around with it in low-stakes, small-data toy projects, I'm not really familiar with the footguns and failure modes...
All that aside, plugging holes in a sinking database for six months because you can't come to a decision does not sound like a fun time :D
I'm also a sucker for serif fonts so points for that.
What if you could sell the data for each argument? That might be valuable to LLM labs, because you could essentially guarantee that every single argument you provide is human-checked, and you could accumulate a large DB of those. Of course you'll never capture every possible argument, but rather it's a mechanism that allows incremental improvement over time. In any case, codifying logic and natural language is a very nice idea.
I am interested in seeing a personal version of this. Help people work out their own brain knots to make decision-making easier. I'm actually decent at mending fences with others. But making decisions myself? Impossible.
I've actually had a lot of fun hooking it up to an LLM. I have a private MCP server for it. The tools tell the model how to read a concludia argument and validate it. That's what generated all the counterpoints for the "carbon offset" argument (https://concludia.org/step/9b8d443e-9a52-3006-8c2d-472406db7...).
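For anyone curious, the rough shape of one of those tools looks something like this (my actual server is private, so the tool name, endpoint, and response handling below are illustrative guesses, not the real API):

    # Rough shape of an MCP tool that lets an LLM read a concludia argument.
    # The real server is private; read_argument, the URL, and the plain-text
    # response here are hypothetical stand-ins.
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("concludia")

    @mcp.tool()
    def read_argument(step_id: str) -> str:
        """Fetch an argument step so the model can walk its premises and conclusion."""
        resp = httpx.get(f"https://concludia.org/step/{step_id}")
        resp.raise_for_status()
        return resp.text

    if __name__ == "__main__":
        mcp.run()  # serve over stdio to any MCP-capable client

The validation tool follows the same pattern: expose a function, describe what it does in the docstring, and let the client model decide when to call it.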
And yeah... when I've tried to fully justify my own conclusions that I was sure were correct... it's pretty humbling to realize how many assumptions we build into our own beliefs!