I agree, LLMs have biases. That was my primary motivation for building this tool: to put the weight on the LLMs to synthesize rather than to think about and interpret the subjects. It's actually the main goal of this tool - maybe I don't articulate that as well as I could - I'm open to suggestions here!

I agree that the first step is figuring out which research is most important and reliable. There is a planning stage that considers the sources and which ones hold credibility.

In addition, the user has full control over the sources the tool uses, and can even add their own (via MCP tools).

Also, since it's open source, you have full control over the flow/prompts/source methods/etc., so you can optimize this yourself and even contribute improvements to ensure it benefits research as a whole.
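
To make this concrete, here's a rough sketch of how a user-controlled source registry could look - the names, config shape, and MCP server command below are purely hypothetical illustrations, not CleverBee's actual API:

    # Hypothetical sketch only - the real CleverBee config/API may differ.
    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str          # label the planning stage sees
        kind: str          # "builtin" or "mcp"
        command: str = ""  # launch command for an MCP server, if any

    # The user decides which sources the tool may consult,
    # including ones they add themselves via MCP servers.
    sources = [
        Source(name="web_search", kind="builtin"),
        Source(name="pubmed", kind="mcp", command="npx some-pubmed-mcp-server"),
    ]

    for src in sources:
        print(f"Will consult: {src.name} ({src.kind})")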

I welcome your feedback, and any code amendments you propose to improve the tool. You clearly understand what makes good research and your contributions will be highly valued by all of us.

reply
By having to defend your thesis/work like this, the whole piece is getting lifted into academic heights in a way, so you might as well keep calling its result and process research :)

What description would the tool itself come up with, BTW?

When you answer with "I agree, LLMs have biases.", I immediately suspect it to be an LLM calming me after I corrected it, though. So, the world has definitely changed, and we might need to allow for correcting the broadness of words and meanings.

After all, you did not write a thesis, scientific research, or similar, and I remember it being called researching when people went looking up sources (which took them longer than it takes an agent or LLM these days). Compressing that into a report might make it a review, but anyway. Great that you assembled a useful work tool here for some who need exactly that.

reply
I'm very curious what the result of such a question would be, since its main purpose is to really search for and select balanced/unbiased sources rather than hold an opinion of its own. I'm curious whether it will give an answer on this. I just gave it the request:

"I am struggling what to call this other than "Deep Research tool" as really it is looking online and scanning/synthesizing sources (that's you, by the way!). With that in mind, someone suggested "literature review" but it makes me think of books. I wonder if you can see what this kind of "research" is and suggest a name to describ it based on all the information you uncover on what good research looks like."

Let's see how it gets on...

Also, something I think about a lot (you sound like a deep thinker!) - when we believe something that is untrue, can that belief make it true? (purely a hypothetical thought)... if 1000 people were told coffee was bad for them, would the mind-body connection take over and amplify this into reality? We are certainly in interesting times!

reply
Ha, hoc - it was quite interesting to see, and I learned a bit about this along the way.

Apparently the suggested term is "Digital Information Synthesis"

You can see the report here: https://docs.google.com/document/d/1Vg_UWUPelWohzVGduaKY7Czd...

This was quite an interesting use case, thanks!

reply
I updated the README to describe it as "CleverBee: AI-Powered Online Data Information Synthesis Assistant" and put the emphasis on synthesis.

Also, I put a new section in place:

What cleverb.ee is not
Cleverb.ee is not a replacement for deep domain expertise. Despite explicit instructions and low temperature settings, AI can still hallucinate. Always check the sources.

reply
Did hoc intentionally pun by writing that this meta-analysis is getting “lifted”?

Reference: https://legacy.reactjs.org/docs/higher-order-components.html

reply
> I welcome your feedback, and any code amendments you propose to improve the tool. You clearly understand what makes good research and your contributions will be highly valued by all of us.

This bit is worded in a way that feels manipulative. Perhaps that’s why your comment is being downvoted. Regardless, I’ll give you the benefit of the doubt and believe you’re being honest and replying in good faith; my genuine intentions have been misinterpreted in the past too, and I don’t wish to do that to another.

I won’t propose any code improvements, because I don’t believe projects like yours are positive for the world. On the contrary, this over-reliance on LLMs and taking their output as gospel will leave us all worse off. What we need is the exact opposite: for people to be actively aware of the inherent flaws in the system and internalise the absolute need to verify.

reply
I'd like to humor you a bit on what you say (going off on a little tangent here).

- Were all the existing sources (e.g. news, podcasts, etc) ever reliable?
- Do people lobby for certain outcomes in some research/articles?

And finally...

- Now that we know LLMs hallucinate and news can easily be faked, are people finally starting to question everything, including what they were told before?

Of course, these are mostly rhetorical, but I think about this a lot - whether it is a good or bad thing. Now that we know we are surrounded by fakeness that can be generated in seconds, maybe people will finally gain critical thinking skills and the ability to discern truth from falsehood. Time will tell!

For now, the way I see it, people are becoming reliant on these tools, and only a community of people collaborating to improve the outcomes can ensure that ulterior agendas do not steer the results.

reply
> Were all the existing sources (e.g. news, podcasts, etc) ever reliable?

No, of course not. And I would deeply appreciate if you stopped arguing with straw men. When you do it repeatedly you are either arguing in bad faith or unable to realise you’re doing so, neither of which is positive. Please engage with the given argument, not a weaker version designed to be attacked.

But I’ll give you the benefit of the doubt once more.

Provenance matters. I don’t trust everyone I know to be an expert on every subject, but I know who I can trust for what. I know exactly who I can ask biology, medical, or music questions. I know those people will give me the right answers and an accurate evaluation of their confidence, or tell me truthfully when they don’t know. I know they can research and identify which sources are trustworthy. I also know they will get back to me and correct any error they may have made in the past. I can also determine who I can dismiss.

The same is true for searching the web. You don’t trust every website or author, you learn which are trustworthy.

You don’t have that with LLMs. They are a single source that you cannot trust for anything. They can give you different, opposite answers to the same query, all with the same degree of confidence. And no, the added sources aren’t enough, because not only are they often summarised wrongly, sometimes even stating the opposite, but most people won’t ever verify them. Not to mention they can be made to support whatever point the creators want.

> Now that we know LLMs hallucinate and news can easily be faked, are people finally starting to question everything, including what they were told before?

No, they are not. That is idyllic and naive and betrays a lack of attention to the status quo. People are tricked and scammed every day by obviously AI pictures and texts, and double down on their wrong beliefs even when something is proven to have been a lie.

reply